Tectonic effect for establishing a semi-dynamic datum in Southwest Taiwan
Kuo-En Ching1 &
Kwo-Hwa Chen2
To accommodate the effects of crustal deformation in the current national static geodetic datum (Taiwan Geodetic Datum 1997 (TWD97)) in SW Taiwan, 221 campaign-mode global positioning system (GPS) stations observed from 2002 to 2010 were used in this study to generate a surface horizontal velocity model for establishing a semi-dynamic datum in SW Taiwan. An interpolation method, Kriging, and a tectonic block model, DEFNODE, were used to construct the surface horizontal velocity model. Forty-four continuous GPS stations were used to examine the performance of the semi-dynamic datum through exterior validation. The average values of the residual errors obtained using the Kriging method for the north and east components are ±1.9 and ±2.2 mm/year, respectively, whereas those obtained using the block model are ±2.0 and ±2.9 mm/year, respectively. The distribution of residuals greater than 5 mm/year for both models generally corresponds to a high strain rate area derived from the horizontal velocity field. In addition, these residuals may result from deep-seated landslides and active folding or mud diapirs in a mudstone area. Similar exterior checking results obtained using the Kriging interpolation method and block model for SW Taiwan indicate a high station density and a relatively satisfactory station spatial coverage. However, the block model is superior to the Kriging method because it accounts for the characteristics of the geological structure. In addition, results from a traditional coordinate transformation were compared with those of the semi-dynamic datum. The results indicate that a semi-dynamic datum is a feasible solution for maintaining the accuracy of TWD97 at an appropriate level over time in Taiwan.
Taiwan is located at the present-day plate convergence boundary between the Eurasian plate and the Philippine Sea plate (Fig. 1); earthquake activity is abundant, and the convergence rate is approximately 82 mm/year across the island, reflecting a high surface strain rate of up to 1 μstrain/year (Yu et al. 1997; Bos et al. 2003; Chang et al. 2003; Byrne et al. 2011). However, the current Taiwan Geodetic Datum (Taiwan Geodetic Datum 1997 (TWD97)), announced in 2012, is considered a static geocentric datum, and it is aligned with the International Terrestrial Reference Frame 1994 (ITRF94) at the epoch 2010.0 (MOI 2012). The horizontal baseline accuracy targeted for TWD97 is 2.0 mm + 1 ppm × k, where k is the length of the baseline, which corresponds to approximately 0.05 m horizontally. In other words, this datum will be distorted and lose its accuracy after 4–5 years because of the presence of a velocity gradient of up to 1 μstrain/year across the geodetic network. Therefore, the main concern in constructing a modern national geodetic datum in Taiwan is determining how to accommodate the effects of crustal deformation to maintain the spatial accuracy of its coordinates over a long period.
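For example, a strain rate of 1 μstrain/year across a 10-km baseline accumulates 1 × 10^-6 × 10 km ≈ 10 mm/year of relative displacement, so the approximately 0.05 m horizontal accuracy budget is consumed after roughly 5 years, consistent with the 4–5 years noted above.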
Horizontal velocity field in SW Taiwan with respect to the station S01R. The 95 % confidence error ellipse is shown at the tip of each velocity vector. Blue vectors are derived from the continuous GPS data, and black vectors are inferred from campaign-mode GPS data. Red lines are locations of active faults. 1 The Liuchia–Muchiliao fault system (LCMF), 2 the Chukou fault (CKUF), 3 the Houchiali fault (HCLF), 4 the Hsinhua fault (HHAF), 5 the Tsochen fault (TCNF), 6 the Hsiaokangshan fault (HKSF), 7 the Chishan fault (CHNF), 8 the Chaochou fault (CCUF), 9 the Fengshan transfer fault (FTFZ), and 10 the Kaoping fault (KPT). Thick red lines are the active faults emphasized in this study. The inset shows the tectonic setting of Taiwan. I Manila trench, II Ryukyu trench, III deformation front, and IV Okinawa trough
In general, establishing a semi-dynamic datum by assigning deformation models to a specific geodetic datum, such as the Japanese Geodetic Datum 2000 (Tanaka et al. 2007) and the New Zealand Geodetic Datum 2000 (Grant and Blick 1998; Beavan and Haines 2001; Beavan and Blick 2007), is an appropriate means of sustaining the coordinate spatial accuracy of a national geodetic datum at a plate boundary, where the high surface strain must be accounted for (Tregoning and Jackson 1999). The coordinate accuracy requirements have been met over time by using such a deformation model. The long-term surface velocity resulting from interseismic plate tectonic loading and the permanent surface displacement caused by discrete events, such as earthquakes, are generally included in the deformation model to reflect the true deformation field with adequate accuracy and resolution (Beavan and Blick 2007). Therefore, constructing a semi-dynamic datum is a solution for maintaining the accuracy of TWD97 at an appropriate level over time in Taiwan.
More than 750,000 people reside in SW Taiwan, and a high velocity gradient has been observed in this area (Ching et al. 2007, 2011b) (Fig. 1). Previous studies have inferred a high contraction rate of approximately 1.0 μstrain/year, with a clockwise rotation of 14.5°–27.1°/Myr and right-lateral shearing, from sparse global positioning system (GPS) horizontal velocities in the fold-and-thrust belt of SW Taiwan (Bos et al. 2003; Chang et al. 2003; Ching et al. 2007, 2011b; Hsu et al. 2009). In addition, the earthquake record from the Central Weather Bureau of Taiwan indicates that no M L > 6.0 earthquakes have occurred in our study area since 1900, except for the 2003 M w 6.8 Chengkung earthquake in eastern Taiwan and the 2010 M w 6.4 Jiasian earthquake, which occurred close to the study area. Most of the coseismic displacements caused by those two earthquakes are smaller than 5 mm (e.g., Ching et al. 2011a). In other words, no major postseismic deformation has disturbed the secular motion in this region. Consequently, SW Taiwan is an excellent area for establishing a semi-dynamic datum by estimating a horizontal velocity field to maintain the coordinate accuracy of sites.
The purpose of this paper is to provide a horizontal velocity model based on 265 GPS observations for estimating and predicting the coordinate changes associated with the horizontal crustal motion in SW Taiwan (Fig. 1). The interpolation method, the tectonic block model, and traditional coordinate transformation were used to analyze the accuracy in estimating and predicting the coordinate changes at arbitrary sites. This paper presents a comparison of efficiency between the interpolation method and block model.
Geological background
The major geological characteristic in SW Taiwan is the thick Plio-Pleistocene mudstone, Gutingkeng Formation. The Gutingkeng Formation dominates the study area, and its presence is proposed to be responsible for the nonoccurrence of large earthquakes in SW Taiwan since the last century. The Gutingkeng Formation consists of gray sandy siltstone and sandy mudstone intercalated with lenticular greywacke and subgraywacke with abundant Mollusca (Chou 1971).
Six major active faults, the Liuchia–Muchiliao fault system (LCMF), the Hsinhua fault (HHAF), the Houchiali fault (HCLF), the Hsiaokangshan fault (HKSF), the Chishan fault (CHNF), and the Fengshan transfer fault zone (FTFZ) from north to south (Fig. 1), account for the surface motion observed in this study. For the LCMF (number 1 in Fig. 1), although this fault system exhibits remarkable landscape features in aerial photographs, evidence of stratigraphic separation is absent in the field (Sun 1971). The northern segment of the LCMF (the Muchiliao fault) is a blind thrust and might have been reactivated during the Late Quaternary (Shyu et al. 2005). The southern segment of the LCMF (the Liuchia fault) is also a thrust fault and cuts across Holocene sediments. For the HHAF (number 4 in Fig. 1), the length of unambiguous surface rupture associated with the 1946 M L 6.3 Hsinhua earthquake is about 6 km (Bonilla 1977). A high dip angle of about 70° to 90° is deduced from surface trenching data (Lee et al. 2000) and the focal mechanism solution of the 1946 Hsinhua earthquake (Cheng and Yeh 1989). For the HCLF (number 3 in Fig. 1), a normal fault associated with the growth of a diapiric fold was first proposed on the basis of seismic, gravity, drilling well, and shallow seismic survey data (Hsieh 1972; Kuo et al. 1998). In addition, a reverse fault has also been proposed on the basis of the photogeologic technique (Sun 1964) and of repeated geodetic surveys and D-InSAR results (Fruneau et al. 2001; Huang et al. 2006, 2009; Mouthereau et al. 2002). For the HKSF (number 6 in Fig. 1), an obvious fault scarp indicates that the NNE-striking HKSF is approximately 8 km long and dips to the east (Hsu and Chang 1979; Sun 1964). For the CHNF (number 7 in Fig. 1), the NE-trending CHNF is a reverse fault with a left-lateral component according to slickenside analysis along the fault plane (Chen 2005). However, fault analysis suggests normal displacement on the CHNF (Gourley et al. 2012). In addition, GPS measurements indicate that the CHNF is a reverse fault with a right-lateral strike-slip component (Ching et al. 2007, 2011b; Hu et al. 2007; Lacombe et al. 2001). For the FTFZ (number 9 in Fig. 1), the N140°E-striking FTFZ has been detected by analyzing geomorphic features derived from the Digital Elevation Model (DEM) (Deffontaines et al. 1994, 1997; Lacombe et al. 1999) and by conducting GPS surface velocity analysis (Ching et al. 2007).
GPS data acquisition and processing
GPS data
The GPS observations used in this study are from 221 campaign-mode GPS stations installed by the Central Geological Survey and 44 continuously operating GPS stations established by the Central Weather Bureau, the Academia Sinica, and the Central Geological Survey in the study area in SW Taiwan from 2002 to 2010 (Fig. 1). GPS surveys are generally conducted annually, and a benchmark is usually occupied in more than two sessions per year. Each session lasts 6–14 h, and all available satellites rising above a 15° elevation angle are tracked. The sampling interval for data logging is 15 s. The 44 continuous GPS stations are used to examine the accuracy of the horizontal velocity model established from the 221 campaign-mode GPS stations in the study area.
The campaign surveying and continuously recorded GPS data were processed session by session and date by date, respectively, by using Bernese software v.5.0 (Dach et al. 2007) to obtain precise station coordinates. Precise ephemerides provided by the International GNSS Service (IGS) were used and fixed during processing. Five global IGS fiducial stations surrounding Taiwan (IRKT, TSKB, GUAM, PERT, and IISC) on the International Terrestrial Reference Frame (ITRF2008) (Altamimi et al. 2011) were used to determine the coordinates of all campaign-mode and continuous GPS stations in the study area. The horizontal uncertainties of coordinates were estimated to be 2–5 mm.
Horizontal velocity field in SW Taiwan
The GPS horizontal velocities were estimated on the basis of the coordinate time series over a time span of 9 years from 2002 to 2010. The horizontal velocities for the east and north components were estimated using least squares estimation (LSE). An empirical equation y = a + bt + Hc was fitted to the time series in the study area to determine the horizontal velocity field and its standard deviation, where y is the coordinate in each component, a is the offset from the origin, b is the velocity in each component, c is the coseismic offset of the 2003 M w 6.8 Chengkung earthquake in eastern Taiwan and the 2010 M w 6.4 Jiasian earthquake, and H is a step function (Fig. 2). The formal uncertainties (σ_ls) of the velocities obtained using the LSE were scaled by the factor k = (mis/2)^2 (Ching et al. 2011b), where k accounts for the daily coordinate variation in the coordinate time series and mis is the misfit between the calculated and observed time series for the east and north components, respectively. The velocity uncertainty σ for each component was re-estimated using the equation σ = (σ_ls^2 × k)^(1/2) (Ching et al. 2011b).
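As an illustration of this procedure, the following Python sketch fits the model y = a + bt + Hc to a single coordinate component by ordinary least squares and rescales the formal velocity uncertainty as described above. The synthetic time series, occupation epochs, and earthquake dates are hypothetical, and the interpretation of mis as the root mean square of the fit residuals is an assumption; this is not the processing code used in the study.

```python
# Minimal sketch (not the study's processing code): estimate the velocity of one
# coordinate component from y = a + b*t + H*c and rescale its uncertainty.
import numpy as np

def fit_component(t, y, quake_epochs):
    """t: decimal years; y: coordinate component (mm); quake_epochs: coseismic step epochs."""
    H = np.column_stack([(t >= tq).astype(float) for tq in quake_epochs])  # step functions
    A = np.column_stack([np.ones_like(t), t, H])                           # columns: a, b, c...
    p, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ p
    # formal (LSE) standard deviation of the velocity parameter b
    cov = np.linalg.inv(A.T @ A) * resid.var(ddof=A.shape[1])
    sigma_ls = np.sqrt(cov[1, 1])
    # rescale: k = (mis/2)^2, with mis taken here as the RMS misfit of the time series
    mis = np.sqrt(np.mean(resid ** 2))
    sigma = np.sqrt(sigma_ls ** 2 * (mis / 2.0) ** 2)
    return p[1], sigma  # velocity (mm/year) and rescaled uncertainty

# Hypothetical annual occupations with a small synthetic offset at epoch 2010.2
t = np.arange(2002.5, 2011.0, 1.0)
rng = np.random.default_rng(1)
y = 10.0 - 35.0 * (t - 2002.0) + 4.0 * (t >= 2010.2) + rng.normal(0.0, 2.0, t.size)
v, sv = fit_component(t, y, quake_epochs=[2003.9, 2010.2])
print(f"velocity = {v:.1f} mm/year, sigma = {sv:.1f} mm/year")
```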
Selected coordinate time series. Blue and red circles are time series in the north and east components, respectively. Black lines show the fitted models. The two dashed lines mark the epochs of the 2003 M w 6.8 Chengkung earthquake and the 2010 M w 6.4 Jiasian earthquake
GPS horizontal velocities are relative to the station S01R at Penghu Island in the Chinese continental margin (Fig. 1). The horizontal velocities east of the CHNF are approximately 66 mm/year, N270°. The velocities gradually decrease westward to approximately 15 mm/year, N259°. The azimuths of velocities exhibit pseudo-counterclockwise rotation from EW to WSW. Most azimuths of velocities in the northern area are almost westward. However, the azimuths in the southern area rotate from nearly W (approximately 270°) to WSW (approximately 255°). A velocity gradient appears between the LCMF–HHAF–HCLF and the CHNF (Fig. 1).
Establishment of surface velocity model
In this study, the interpolation method and tectonic block model were used to construct a surface horizontal velocity model. In general, the spatial variation of the surface horizontal velocity is caused by the interaction between fault coupling and plate motion (e.g., McCaffrey 2002). A tectonic block model, which is a physical model incorporating conservation of momentum and geological constraints (e.g., McCaffrey 2002), was therefore adopted in this study. On the other hand, the surface velocity field in our study area is continuous because most active faults are locked at the surface. Hence, an interpolation method, which is a purely mathematical approach, is also appropriate for obtaining a surface velocity model in this area.
Interpolation method
Various interpolation methodologies have been developed to construct new data points within the range of a discrete set of known data points by using curve fitting or regression analysis. Because the surface velocities carry geographic structure and are noisy, a non-linear interpolation is needed that does not merely fit an interpolant through the given data points but also performs regression. Kriging interpolation, which is mathematically closely related to regression analysis, was developed in geostatistics for the estimation and prediction of spatial data (e.g., Roush et al. 2003; Samsonov and Tiampo 2006; van Dalfsen et al. 2006; Samsonov et al. 2007; Miura 2010; Majdański 2012). In the Kriging method, four frequently used semi-variance functions related to the distances among sites (exponential, Gaussian, linear, and spherical models) can be employed to express the spatial variation, and the method minimizes the error of the predicted values, which is itself estimated from the spatial distribution of the data (Wackernagel 2003). Therefore, the Kriging weights depend not on the data values themselves but on the data configuration and the variogram parameters (Goovaerts 1997; van Dalfsen et al. 2006). Analyzing the root mean square (RMS; Eq. 1) of the differences between modeled and observed velocities at the 44 continuous GPS stations revealed that the optimal function is the exponential model. The results of the RMS analyses for the four aforementioned semi-variance functions are shown in Table 1. The optimal performance relative to the other functions is obtained using the exponential model (Eqs. 2 and 3).
Table 1 RMS analyses of the four semi-variance functions of Kriging interpolation method at 44 continuous GPS stations
$$ \mathrm{RMS} = \pm \sqrt{\frac{\sum_{i=1}^{n}\left(v_i^{\mathrm{model}} - v_i^{\mathrm{observed}}\right)^2}{n}}, \qquad (1) $$
where n is the number of sites and \( {v}_i^{\mathrm{model}} \) and \( {v}_i^{\mathrm{observed}} \) are the modeled and observed velocities at the 44 continuous GPS stations, respectively.
$$ \gamma(h) = C_0 + C \cdot \left[1 - \exp(-h/a)\right], \qquad \mathrm{if}\ h > 0 \qquad (2) $$
$$ \gamma(h) = 0, \qquad \mathrm{if}\ h = 0 \qquad (3) $$
where h is the lag distance between data point pairs, C 0 is a nugget effect, C is a scale, and a is a range.
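As a concrete illustration of Eqs. 1–3, the Python sketch below computes the RMS of model-minus-observation differences and evaluates the exponential semi-variance model; the variogram parameter values are placeholders rather than the values fitted in this study.

```python
# Sketch of Eq. 1 (RMS of model-minus-observation differences) and the
# exponential semi-variance model of Eqs. 2-3; parameter values are placeholders.
import numpy as np

def rms(v_model, v_observed):
    v_model, v_observed = np.asarray(v_model), np.asarray(v_observed)
    return np.sqrt(np.mean((v_model - v_observed) ** 2))

def gamma_exponential(h, c0, c, a):
    """Semi-variance: gamma(h) = C0 + C*(1 - exp(-h/a)) for h > 0 and 0 at h = 0."""
    h = np.asarray(h, dtype=float)
    return np.where(h > 0, c0 + c * (1.0 - np.exp(-h / a)), 0.0)

# Example with hypothetical values: lag distances in km and a range a = 15 km
print(gamma_exponential([0.0, 5.0, 15.0, 100.0], c0=0.5, c=4.0, a=15.0))
print(rms([21.3, -33.0, 5.7], [20.1, -35.2, 6.4]))
```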
The horizontal velocities of the 221 campaign-mode GPS stations between 2002 and 2010 were employed with an assumption of equal weighting. A horizontal velocity model on a grid with 2.5-km spacing (approximately 90 arcsec in both latitude and longitude) covering the study area (70 km (E-W) by 80 km (N-S)) was generated using the Kriging method with the exponential model and a range a = 15 km (Eq. 2), because the average distance among the 221 campaign-mode GPS sites is approximately 5 km (Fig. 3). The critical semi-variance was set to C_0 + C (Eq. 2 as h → ∞) for points located outside the area covered by the observed sites, that is, the 221 campaign-mode GPS stations. Moreover, to calculate the velocities at arbitrary points, the gridded velocities at the four grid points that form the cell in which the point lies were again interpolated using the bilinear method, as sketched below.
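A minimal Python sketch of this final bilinear step, assuming the gridded velocities of one component are stored in a NumPy array; the grid values here are random placeholders, not the Kriging grid of this study.

```python
# Bilinear interpolation of a gridded velocity component at an arbitrary point,
# as used to evaluate the 2.5-km grid; the grid values below are placeholders.
import numpy as np

def bilinear(xg, yg, grid, x, y):
    """xg, yg: ascending 1-D grid coordinates (km); grid[j, i] = value at (xg[i], yg[j])."""
    i = int(np.clip(np.searchsorted(xg, x) - 1, 0, len(xg) - 2))
    j = int(np.clip(np.searchsorted(yg, y) - 1, 0, len(yg) - 2))
    tx = (x - xg[i]) / (xg[i + 1] - xg[i])
    ty = (y - yg[j]) / (yg[j + 1] - yg[j])
    return ((1 - tx) * (1 - ty) * grid[j, i] + tx * (1 - ty) * grid[j, i + 1]
            + (1 - tx) * ty * grid[j + 1, i] + tx * ty * grid[j + 1, i + 1])

# Hypothetical 2.5-km grid of east velocities (mm/year) over a 70 km x 80 km area
xg = np.arange(0.0, 72.5, 2.5)
yg = np.arange(0.0, 82.5, 2.5)
ve_grid = np.random.default_rng(0).uniform(-66.0, -15.0, size=(yg.size, xg.size))
print(bilinear(xg, yg, ve_grid, x=33.7, y=41.2))
```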
Horizontal velocity model derived from the Kriging interpolation method in SW Taiwan. Black vectors are observations and the 95 % confidence error ellipse is shown at the tip of each velocity vector. Green vectors are derived from the Kriging interpolation method. Red lines are locations of active faults. Thick red lines are the active faults emphasized in this study
Tectonic block model
The interseismic velocity field at the plate boundary zone is generally dominated by tectonic block rotations and interseismic coupling on faults (Savage and Simpson 1997). Based on this concept, an elastic kinematic block model with the code DEFNODE, developed by McCaffrey (2002), was adopted in this study to construct a surface horizontal velocity model for SW Taiwan. The nonlinear inversion in this program involved applying simulated annealing to downhill simplex minimization (Press et al. 1989) to invert GPS horizontal velocities simultaneously for Euler pole locations and the angular velocities of tectonic blocks and coupling coefficients on the block-bounding faults. The coupling coefficient is used to describe the velocity gradient, which is caused by the friction on the fault plane, across the fault. In this inversion, a constraint that decreases the coupling coefficient down-dip from being totally stuck at the surface to totally creeping at the bottom of the fault was imposed because most terrestrial faults in SW Taiwan exhibit no clear evidence of aseismic surface creeping. The contribution of fault coupling to the velocities was calculated using the formulations of Okada (1985) in an elastic half-space material. The best fit parameters were determined by minimizing data misfit, defined by the reduced chi-square statistic (χ 2) (McCaffrey 2002). A χ 2 value considerably greater than 1 indicates a relatively poor fit for the model, and a value close to 1 indicates an acceptable model fit.
Tectonic blocks
Eleven elastic tectonic blocks in SW Taiwan were identified according to the concept proposed by Ching et al. (2011b) (Fig. 4). Because the trends of the GPS velocity gradients are generally parallel to the strikes of active faults in this region (red lines in Fig. 1), the block boundaries (black lines in Figs. 4 and 5) are mainly selected by determining whether the locations of GPS velocity gradients and active faults match. The blocks used in this study are closed, spherical polygons on the Earth's surface and cover the entire model domain. All points within a block are assumed to rotate with the same angular velocity.
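To make the rigid-rotation assumption concrete, the sketch below predicts the horizontal velocity of a point from a block's Euler pole (rotation on a sphere only, without the elastic contribution of fault coupling). The pole location and rotation rate are hypothetical values for illustration, not the poles estimated in this study.

```python
# Rigid-block surface velocity from an Euler pole: v = omega x r, expressed in
# local east/north components. Pole and rate below are hypothetical values.
import numpy as np

R_EARTH = 6.371e6  # m

def block_velocity(lat, lon, pole_lat, pole_lon, omega_deg_per_myr):
    """Horizontal velocity (east, north) in mm/year of a point rotating about an Euler pole."""
    lat, lon, plat, plon = map(np.radians, (lat, lon, pole_lat, pole_lon))
    w = np.radians(omega_deg_per_myr) / 1e6          # rad/year
    omega = w * np.array([np.cos(plat) * np.cos(plon),
                          np.cos(plat) * np.sin(plon),
                          np.sin(plat)])
    r = R_EARTH * np.array([np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)])
    v = np.cross(omega, r) * 1e3                      # mm/year in the Earth-centered frame
    east = np.array([-np.sin(lon), np.cos(lon), 0.0])
    north = np.array([-np.sin(lat) * np.cos(lon), -np.sin(lat) * np.sin(lon), np.cos(lat)])
    return v @ east, v @ north

# Hypothetical block: pole at (25.0 N, 122.5 E), 20 deg/Myr, evaluated at (23.0 N, 120.4 E)
print(block_velocity(23.0, 120.4, 25.0, 122.5, 20.0))
```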
Horizontal velocity model derived from the block model in SW Taiwan. Black vectors are observations and the 95 % confidence error ellipse is shown at the tip of each velocity vector. Green vectors are derived from the block model. Red lines are locations of active faults. Thick red lines are the active faults emphasized in this study. Black lines denote the block boundaries
Locations of the Euler poles, block motions, and distribution of the coupling ratio (from 0 to 1) along the fault planes. Solid dots show the locations of the Euler poles and their 95 % confidence error ellipse are shown with black ellipses
Fault configurations
The contribution of coupling on ten active faults to surface velocities in SW Taiwan was estimated according to the concept proposed by Ching et al. (2011b) in this study (Figs. 1 and 5). Faults with a slip rate are represented by a series of node points with 3-D shapes within the spherical Earth (McCaffrey 2002) so that the fault dip and curvature are approximated. The fault geometries in the structural model, mainly modified by Ching et al. (2011b), are fixed in this model (Fig. 5). Faults are assumed to extend to a depth of 5–10 km, below which they creep at the full relative plate velocity. The exact dip angles for most reverse faults remain undetermined, and the lower edges of east-dipping thrusts in SW Taiwan increase from west to east with a depth from 5 to 7 km and with dip angles from approximately 25° to approximately 45°. In addition, because the depths and dip angles of two nearly E-W-striking strike-slip faults are still unclear, the fault depths of the Hsinhua (number 4 in Fig. 1) and Fengshan transfer faults (number 9 in Fig. 1) were set as 10 km, and the nearly vertical fault planes (dip angles between 90° and 70°) were assigned on the basis of limited balanced cross sections (Huang et al. 2004).
Kriging method interpolating results
The RMSinterior, denoting the residual errors of the horizontal velocities of the 221 campaign-mode GPS stations, was evaluated using Eq. 1. The average RMSinterior is ±0.3 mm/year for both north and east components (Fig. 6a). Model residuals are generally lower than ±2.0 mm/year (Fig. 6a). However, the residual for the CHNF area is greater than ±3.0–4.0 mm/year (number 7 in Figs. 1 and 6a).
Distribution of velocity residuals in SW Taiwan. Black vectors are residuals of the horizontal velocities (RMSinterior) derived from the 221 campaign-mode GPS stations. Blue vectors are the residual errors of horizontal velocities (RMSexterior) from 44 continuous GPS stations. Red lines are locations of active faults. Thick red lines are the active faults emphasized in this study. Colored map is the strain rate field derived from horizontal velocities. a Velocity residuals inferred from the Kriging method. b Velocity residuals inferred from the block model. Black lines denote the block boundaries
Tectonic block modeling results
The preliminary locations of Euler poles for each tectonic block were calculated by inverting horizontal velocities without considering the effect of elastic strain because of the locking on the faults. The locations of Euler poles and the angular velocities of the aforementioned inversions serve as a priori information in inversions for motions of tectonic blocks and interseismic coupling on faults (Fig. 5).
The reduced χ 2 statistic of the optimized model is 1.29. Model residuals are generally lower than 2.0 mm/year (Fig. 6b). However, the residual around the CHNF region is greater than 4.0 mm/year (number 7 in Figs. 1 and 6b). The RMSinterior residual errors of the 221 campaign-mode GPS stations are ±2.8 and ±2.0 mm/year for the east and north components, respectively.
Coordinate transformation
Conventionally, two-dimensional coordinate transformations are used to transform the coordinates of stations from the observed time t 0 to a specific epoch t 1 by applying a four-parameter transformation (Helmert transformation) (Ghilani 2010) or a six-parameter transformation (affine transformation) (Chang 2004) to reduce the horizontal distortions caused by crustal deformation in a local area. The six-parameter transformation is more commonly used than the four-parameter transformation because it uses more geometric parameters to absorb the horizontal deformation. Therefore, the six-parameter transformation was adopted in this study.
The six-parameter transformation (Eqs. 4 and 5) was estimated by the LSE method using 33 control points (Fig. 7a) selected from the 221 campaign-mode GPS stations in the study area (Fig. 1).
$$ x^{\prime} = ax + by + f \qquad (4) $$
$$ y^{\prime} = cx + dy + g, \qquad (5) $$
where (x,y) and (x′,y′) are the coordinates of the control points in the epochs t 0 and t 1, respectively, and a, b, c, d, f, and g are the six parameters of the coordinate transformation.
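The following Python sketch illustrates how the six parameters can be estimated by least squares and applied to arbitrary points; the control-point coordinates here are synthetic placeholders rather than the 33 control points of this study.

```python
# Six-parameter (affine) transformation estimated by least squares (Eqs. 4-5):
# x' = a*x + b*y + f,  y' = c*x + d*y + g.  Control points below are synthetic.
import numpy as np

def fit_affine(xy0, xy1):
    """xy0, xy1: (n, 2) arrays of control-point coordinates at epochs t0 and t1."""
    n = xy0.shape[0]
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = xy0; A[0::2, 4] = 1.0   # rows for x' = a*x + b*y + f
    A[1::2, 2:4] = xy0; A[1::2, 5] = 1.0   # rows for y' = c*x + d*y + g
    L = xy1.reshape(-1)                    # interleaved x'1, y'1, x'2, y'2, ...
    params, *_ = np.linalg.lstsq(A, L, rcond=None)
    return params  # a, b, c, d, f, g

def apply_affine(params, xy):
    a, b, c, d, f, g = params
    x, y = xy[:, 0], xy[:, 1]
    return np.column_stack([a * x + b * y + f, c * x + d * y + g])

# Synthetic control points: a small rotation plus translation between epochs
rng = np.random.default_rng(0)
xy0 = rng.uniform(0.0, 70_000.0, size=(33, 2))
theta = np.radians(0.001)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
xy1 = xy0 @ R.T + np.array([0.02, -0.04])
p = fit_affine(xy0, xy1)
print(np.sqrt(np.mean((apply_affine(p, xy0) - xy1) ** 2)))  # fit RMS
```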
To avoid the contamination of coseismic displacements resulting from the 2003 M w 6.8 Chengkung earthquake and the 2010 M w 6.4 Jiasian earthquake, the period adopted for the coordinate transformation experiment was 2004–2009. Therefore, the six parameters were evaluated for five periods, 2004–2005, 2004–2006, 2004–2007, 2004–2008, and 2004–2009, by using the LSE method.
In this study, the observed coordinates of the 44 continuous GPS stations in the study area (Fig. 1) were compared with their corresponding transformed coordinates to analyze the accuracy of the coordinate transformation. The RMS was evaluated using Eq. 1 for the residuals in different epochs (Table 2 and Fig. 7). The RMS of the residuals quickly increased from ±7.8 mm in 2004–2005 to ±19.1 mm in 2004–2009. Based on these results, coordinate transformation is not an appropriate method for reducing the distortion caused by crustal deformation between different time epochs in a local area. The longer the time interval is, the greater the residuals of the coordinate transformation are. Moreover, the distribution of control points and surface deformation are the main limitations of coordinate transformation. In a local area, uniformly distributed control points and uniform surface deformation are essential for achieving a high-quality coordinate transformation. The 33 control points used in this study were not uniformly distributed (Fig. 7a). Therefore, coordinate transformation could not satisfactorily reduce the horizontal distortions in this study.
Table 2 RMS of the residuals of the observed and transformed coordinates of 44 continuous GPS stations in five periods (mm)
The coordinate residuals of the 44 continuous GPS stations in different epochs using the coordinate transformation. Red lines are locations of active faults. Thick red lines are the active faults emphasized in this study. a Distribution of 33 selected control points for the coordinate transformation. b–f The coordinate residuals in different epochs. Blue vectors are the residual errors of 44 continuous GPS stations
Precision of velocity models
In this study, the Kriging interpolation method and block model were used to establish the horizontal velocity model for SW Taiwan. The average residual RMSinterior of the Kriging method is considerably lower than that of the block model. Therefore, performance evaluations of the two methods were conducted to determine which is more suitable for establishing a surface horizontal velocity model for SW Taiwan.
First, the signals in surface deformation are composed of tectonic block motion, fault coupling due to friction on the fault plane, seasonal variation, deep-seated landslides, volcanic activity, artificial groundwater withdrawal, and so on (e.g., Murray-Moraleda 2011). The temporal variations in the coordinate time series are stable for some of the aforementioned mechanisms, such as tectonic block motion and locking on the fault plane caused by tectonic loading. The first-order pattern of a horizontal velocity field generally represents tectonic movements. However, the temporal variation of the coordinate time series resulting from deep-seated landslides or groundwater pumping may not be stable (Fig. 8). Those mechanisms temporarily change the orientation of the velocity vector. Block models of velocity consist mainly of tectonic block motions and interseismic coupling on faults. In other words, modeling events caused by inelastic deformation, such as folding, deep-seated landslides, and artificial groundwater pumping, is difficult. By contrast, the Kriging interpolation method attempts to minimize prediction errors, which are themselves estimated (Oliver and Webster 1990). Therefore, the performance of the Kriging method in minimizing residuals is superior to that of the block model.
Temporal variation of the coordinate residuals of the 44 continuous GPS stations in different epochs using the static datum, coordinate transformation, block model, and Kriging interpolation method
Next, to evaluate the precision performance of the two methods, 44 continuous GPS stations observed from 2002 to 2010 in the study area were adopted for exterior checking (blue arrow lines in Figs. 1 and 6). The RMSexterior values of the residual errors of horizontal velocities from 44 continuous GPS stations were calculated using Eq. 1. The RMSexterior values of the Kriging method for the north and east components are ±1.9 and ±2.2 mm/year, respectively, and those of the block model are ±2.0 and ±2.9 mm/year, respectively (Fig. 6). The RMSexterior of the two models is comparable in value and spatial distribution (Fig. 6). In addition, the RMSinterior and RMSexterior of the block model are almost consistent. Compared with the Kriging method, the block model is a physical model and is constrained by geological parameters, such as the locations of faults, dips of faults, and sense of fault slips. Therefore, under a suitable geological constraint, the RMSinterior and RMSexterior of the block model are consistent. However, for the Kriging method, favorable RMSexterior values of ±1.9 and ±2.2 mm/year for the north and east components, respectively, are based on dense spatial coverage of the horizontal velocity field.
Although the Kriging method can easily establish a velocity model, a high station density (e.g., the approximately 5 km spacing in this study area) is required for providing a satisfactory spatial coverage condition for interpolation in SW Taiwan. In addition, a long data-acquisition duration is also required to minimize contamination from inelastic deformation or artificial sources. Conversely, constructing a block model of velocity is relatively difficult because more geological information is required to build the block boundaries and fault parameters (McCaffrey 2002). However, fewer observations are required in the tectonic block model because of the physical constraints originating from geological investigations.
Finally, according to the patterns in the RMSexterior of the Kriging method and the RMSinterior and RMSexterior of the block model (Fig. 6), the distribution of residuals greater than 5 mm/year along blocks B, C, D, E, F, and H of the block model (Fig. 6b) generally corresponds to the high strain rate area derived from the horizontal velocity field, particularly for vectors in blocks D, E, F, and H (Fig. 6). Notably, the orientations of some velocity vectors in blocks B and C, such as those of stations S336 and G126, are sub-perpendicular to the strikes of the mountains (Fig. 4). Although there is no strong evidence proving that these vectors are contaminated by deep-seated landslides, the poor data fitting of the block model in blocks B and C and the orientations of these vectors sub-normal to the strikes of the mountains may imply that deep-seated landslides disturb the horizontal velocities. In addition, folding or mud diapirism in the mudstone area, which acts as a non-recoverable, inelastic strain, is a crucial structural feature in blocks D, E, F, and H (Hsieh 1972; Lacombe et al. 1997, 2004; Pan 1968; Shih 1967) (Fig. 6). Therefore, surface deformation caused by active folding or mud diapirism in a mudstone area is difficult to model using an elastic block model. In addition to the possibilities of deep-seated landslides and active folding or mud diapirism in the mudstone area, these residuals may be attributable to poorly defined block boundaries or unclear fault parameters, because a detailed geological investigation is still required to clarify the present-day characteristics of active structures.
A national geodetic datum is crucial for studying Earth science, establishing basic infrastructure, developing technology, and conducting academic analyses in a country. Therefore, to consider the tectonic effect in the Taiwan national geodetic datum, 221 campaign-mode GPS observations from 2002 to 2010 were employed to establish a surface horizontal velocity model for SW Taiwan by using the Kriging method and block model. According to exterior checking against the 44 continuous GPS stations in the study area, the averages of the residual errors (RMSexterior) obtained using the Kriging method for the north and east components are ±1.9 and ±2.2 mm/year, respectively, and those obtained using the block model are ±2.0 and ±2.9 mm/year, respectively. Similar exterior checking results for the Kriging interpolation method and block model in SW Taiwan indicate a high station density and a relatively satisfactory station spatial coverage in the study area. However, the block model is more favorable than the Kriging method when observation sites are sparse, because it accounts for the characteristics of the geological structure.
Traditional coordinate transformation was also assessed against the coordinate precision of the semi-dynamic datum. Compared with the Kriging method and the block model, the RMS of the residuals of the coordinate transformation rapidly increased from ±7.8 mm in 2005 to ±19.1 mm in 2009 (Fig. 8). Therefore, coordinate transformation is not an appropriate method for eliminating distortion over a long time in a deforming area.
In this study, only surface horizontal motions were considered to establish a horizontal velocity model for the semi-dynamic datum. The effects of coseismic and postseismic deformation were not considered. Therefore, in the future, crustal deformation models should include the surface vertical velocity field, coseismic displacements, and postseismic deformation caused by major earthquakes.
TWD97:
Taiwan Geodetic Datum 1997
ITRF94:
International Terrestrial Reference Frame 1994
ITRF2008:
International Terrestrial Reference Frame 2008
LCMF:
Liuchia–Muchiliao fault system
HHAF:
the Hsinhua fault
HCLF:
the Houchiali fault
HKSF:
the Hsiaokangshan fault
CHNF:
the Chishan fault
FTFZ:
the Fengshan transfer fault zone
DEM:
Digital Elevation Model
IGS:
International GNSS Service
LSE:
least squares estimation
RMS:
root mean square
Altamimi Z, Collilieux X, Métivier L (2011) ITRF2008: an improved solution of the international terrestrial reference frame. J Geod 85:457–473
Beavan J, Blick G (2007) Limitations in the NZGD2000 deformation model. Dyn Planet 130:624–630
Beavan J, Haines J (2001) Contemporary horizontal velocity and strain-rate fields of the Pacific-Australian plate boundary zone through New Zealand. J Geophys Res 106:741–770
Bonilla MG (1977) Summary of Quaternary faulting and elevation changes in Taiwan. Mem Geol Soc China 2:43–56
Bos AG, Spakman W, Nyst MCJ (2003) Surface deformation and tectonic setting of Taiwan inferred from a GPS velocity field. J Geophys Res 108:2458. doi:10.1029/2002JB002336
Byrne T, Chan YC, Rau RJ, Lu CY, Lee YH, Wang YJ (2011) The arc-continent collision in Taiwan. doi: 10.1007/978-3-540-88558-0_8
Chang HT (2004) Arbitrary affine transformation and their composition effects for two-dimensional fractal sets. Image Vis Comput 22(13):1117–1127
Chang CP, Chang TY, Angelier J, Kao H, Lee JC, Yu SB (2003) Strain and stress field in Taiwan oblique convergent system: constraints from GPS observations and tectonic data. Earth Planet Sci Lett 214:115–127
Chen BC (2005) A study on the southern part of Chishan fault. Dissertation, National Cheng Kung University
Cheng SN, Yeh YT (1989) Catalog of the earthquakes in Taiwan from 1604 to 1988. Academia Sinica, Taipei
Ching KE, Rau RJ, Lee JC, Hu JC (2007) Contemporary deformation of tectonic escape in SW Taiwan from GPS observations, 1995–2005. Earth Planet Sci Lett 262:601–619
Ching KE, Johnson KM, Rau RJ, Chuang RY, Kuo LC, Leu PL (2011a) Inferred fault geometry and slip distribution of the 2010 Jiashian, Taiwan, earthquake is consistent with a thick-skinned deformation model. Earth Planet Sci Lett 301:78–86
Ching KE, Rau RJ, Johnson KM, Lee JC, Hu JC (2011b) Present-day kinematics of active mountain building in Taiwan from GPS observations during 1995–2005. J Geophys Res 116:B09405. doi:10.1029/2010JB008058
Chou JT (1971) A preliminary study of the stratigraphy and sedimentation of the mudstone formations in the Tainan area, southern Taiwan. Petrol Geol Taiwan 8:187–219
Dach R, Hugentobler U, Fridez P, Meindl M (2007) Bernese GPS Software Version 5.0. University of Berne
Deffontaines B, Lee JC, Angelier J, Carvalho J, Rudant JP (1994) New geomorphic data on the active Taiwan orogen: a multisource approach. J Geophys Res 99:20243–20266
Deffontaines B, Lacombe O, Angelier J, Chu HT, Mouthereau F, Lee CT, Deramond J, Lee JF, Yu MS, Liew PM (1997) Quaternary transfer faulting in the Taiwan Foothills: evidence from a multisource approach. Tectonophysics 274:61–82
Fruneau B, Pathier E, Raymond D, Deffontaines B, Lee CT, Wang HT, Angelier J, Rudant JP, Chang CP (2001) Uplift of Tainan foreland (SW Taiwan) revealed by SAR Interferometry. Geophys Res Lett 28:3071–3074
Ghilani CD (2010) Adjustment computations: spatial data analysis. John Wiley & Sons, New Jersey
Goovaerts P (1997) Geostatistics for Natural Resources Evaluation. Oxford University Press: 496
Gourley JR, Lee YH, Ching KE (2012) Vertical fault mapping within the Gutingkeng Formation of southern Taiwan: implications for sub-aerial mud diapir tectonics. In: Abstract of the AGU Fall Meeting, San Francisco, Calif., 3–7 Dec 2012
Grant DB, Blick GH (1998) A new geocentric datum for New Zealand (NZGD2000). New Zealand Surveyor 288:40–42
Hsieh SH (1972) Subsurface geology and gravity anomalies of the Tainan and Chungchou structure of the coastal plain of southwestern Taiwan. Petrol Geol Taiwan 10:323–338
Hsu TL, Chang HC (1979) Quaternary faulting in Taiwan. Mem Geol Soc China 3:155–165
Hsu YJ, Yu SB, Simons M, Kuo LC, Chen HY (2009) Interseismic crustal deformation in the Taiwan plate boundary zone revealed by GPS observations, seismicity, and earthquake focal mechanisms. Tectonophysics 479:4–18
Hu JC, Hou CS, Shen LC, Chan YC, Chen RF, Huang C, Rau RJ, Chen KH, Lin CW, Huang MH, Nien PF (2007) Fault activity and lateral extrusion inferred from velocity field revealed by GPS measurements in the Pingtung area of southwestern Taiwan. J Asian Earth Sci 31:287–302
Huang ST, Yang KM, Hung JH, Wu JC, Ting HH, Mei WW, Hsu SH, Lee M (2004) Deformation front development at the northeast margin of the Tainan basin, Tainan-Kaohsiung area, Taiwan. Mar Geophys Res 25:139–156
Huang MH, Hu JC, Hsieh CS, Ching KE, Rau RJ, Pathier E, Fruneau B, Deffontaines B (2006) A growing structure near the deformation front in SW Taiwan as deduced from SAR interferometry and geodetic observation. Geophys Res Lett 33:L12305. doi:10.1029/2005GL025613
Huang MH, Hu JC, Ching KE, Rau RJ, Hsieh CS, Pathier E, Fruneau B, Deffontaines B (2009) Active deformation of Tainan tableland of southwestern Taiwan based on geodetic measurements and SAR interferometry. Tectonophysics 466:322–334
Kuo HY, Wang CY, Chiu GT, Lee CY (1998) Houchiali Fault: an Active Fault? 7th Conference of Geophysics Society 429–437, Taiwan (in Chinese)
Lacombe O, Angelier J, Chen HW, Deffontaines B, Chu HT, Rocher M (1997) Syndepositional tectonics and extension-compression relationships at the front of the Taiwan collision belt: a case study in the Pleistocene reefal limestones near Kaohsiung, SW Taiwan. Tectonophysics 274:83–96
Lacombe O, Mouthereau F, Deffontaines B, Angelier J, Chu HT, Lee CT (1999) Geometry and Quaternary kinematics of fold-and-thrust units of southwestern Taiwan. Tectonics 18:1198–1223
Lacombe O, Mouthereau F, Angelier J, Deffontaines B (2001) Structural, geodetic and seismological evidence for tectonic escape in SW Taiwan. Tectonophysics 333:323–345
Lacombe O, Angelier J, Mouthereau F, Chu HT, Deffontaines B, Lee JC, Rocher M, Chen RF, Siame L (2004) The Liuchiu Hsu island offshore SW Taiwan: tectonic versus diapiric anticline development and comparisons with onshore structures. C R Geoscience 336:815–825
Lee CT, Chen CT, Chi YM, Liao CW, Liao CF, Lin CC (2000) Engineering Investigation of Hsinhua Fault. National Central University 7 (in Chinese)
Majdański M (2012) The structure of the crust in TESZ area by Kriging interpolation. Acta Geophys 60:59–75
McCaffrey R (2002) Crustal block rotations and plate coupling, in Plate Boundary Zones. Geodyn Ser 30:101–122
Miura H (2010) A study of travel time prediction using universal Kriging. TOP 18:257–270
MOI (2012) Report of Taiwan Geodetic Datum 1997 at 2010.0. Ministry of the Interior (MOI), Republic of China (Taiwan). http://www.gps.moi.gov.tw/sscenter/. Accessed 5 April 2012 (in Chinese)
Mouthereau F, Deffontaines B, Lacome O, Angelier J (2002) Variation along the strike of the Taiwan thrust belt, basement control on structural style, wedge geometry and kinematics. Geol Soc Am 358:31–53
Murray-Moraleda J (2011) GPS: applications in crustal deformation monitoring. In: Meyers R (ed) Extreme environmental events, Springer, New York, pp 589–622
Okada Y (1985) Surface deformation due to shear and tensile faults in a half-space. Bull Seism Soc Am 75:1135–1154
Oliver MA, Webster R (1990) Kriging: a method of interpolation for geographical information systems. Int J Geogr Inf Sci 4:313–332
Pan YS (1968) Interpretation aid seismic coordination of the Bouguer gravity anomalies obtained in southern Taiwan. Petrol Geol Taiwan 6:197–207
Press WH, Flannery BP, Teukolsky SA, Vetterling WT (1989) Numerical recipes. Cambridge University Press, Cambridge, UK
Roush JJ, Lingle CS, Guritz RM, Fatland DR, Voronina VA (2003) Surge-front propagation and velocities during the early-1993–95 surge of Bering Glacier, Alaska, U.S.A., from sequential SAR imagery. Ann Glaciol 36:37–44
Samsonov S, Tiampo K (2006) Analytical optimization of a DInSAR and GPS dataset for derivation of three-dimensional surface motion. IEEE Geosci Remote Sens Lett 3:107–111
Samsonov S, Tiampo K, Rundle J, Li Z (2007) Application of DInSAR-GPS optimization for derivation of fine-scale surface motion maps of Southern California. IEEE Geosci Remote Sens Lett 45:512–521
Savage JC, Simpson RW (1997) Surface strain accumulation and the seismic moment tensor. Bull Seism Soc Am 87:1345–1353
Shih TT (1967) A survey of the active mud volcanoes in Taiwan and a study of their types and the character of the mud. Pet Geol Taiwan 6:259–311
Shyu JBH, Sieh K, Chen YG, Liu CS (2005) Neotectonic architecture of Taiwan and its implications for future large earthquakes. J Geophys Res 110, B08402. doi:10.1029/2004JB003251
Sun SC (1971) Photogeologic study of the Hsinying-Chiayi coastal plain, Taiwan. Petrol Geol Taiwan 8:65–75
Tanaka Y, Saita H, Sugawara J, Iwata K, Toyoda T, Hirai H, Kawaguchi T, Matsuzaka S (2007) Efficient maintenance of the Japanese geodetic datum 2000 using crustal deformation models—PatchJGD & semi-dynamic datum. Bull Geogr Surv Inst 54:49–59
Tregoning P, Jackson R (1999) The need for dynamic datums. Geomatics Research Australasia 71:87–102
van Dalfsen W, Doornenbal JC, Dortland S, Gunnink JL (2006) A comprehensive seismic velocity model for the Netherlands based on lithostratigraphic layers. Netherlands J Geosci 85–4:277–292
Wackernagel H (2003) Multivariate geostatistics: an introduction with applications. Springer.
Yu SB, Chen HY, Kuo LC (1997) Velocity field of GPS stations in the Taiwan area. Tectonophysics 274:41–59
We thank the Central Geological Survey for providing the campaign-mode GPS data. We thank the Central Weather Bureau, the Ministry of Interior, the National Land Surveying and Mapping Center, the Central Geological Survey, and the Institute of Earth Sciences, Academia Sinica for providing the continuous GPS data. Figures were generated using the Generic Mapping Tools (GMT), developed by Wessel and Smith (1991). This research was supported by Taiwan NSC grant NSC 101-2116-M-006-012-.
Department of Geomatics, National Cheng Kung University, Tainan, 70101, Taiwan
Kuo-En Ching
Department of Real Estate and Built Environment, National Taipei University, New Taipei City, 23741, Taiwan
Kwo-Hwa Chen
Correspondence to Kwo-Hwa Chen.
Both KEC and KHC initiated the study. KEC helped with the conceptual ideas, collected the data, and also helped write most of the article. KHC helped with the data calculation and analyses. Both authors contributed to interpreting the results and writing the paper. All authors read and approved the final manuscript.
Ching, KE., Chen, KH. Tectonic effect for establishing a semi-dynamic datum in Southwest Taiwan. Earth Planet Sp 67, 207 (2015). https://doi.org/10.1186/s40623-015-0374-0
Horizontal velocity model
Kriging interpolation
Block model
What is the value of $x$ in the equation $6^{x+1}-6^{x}=1080$?
Rewrite the left-hand side as $6^x(6^1-6^0)=6^x\cdot5$. Divide both sides by $5$ to find $6^x=\frac{1080}{5}=216$. Since $216=6^3$, $x=\boxed{3}$.
\begin{document}
\noindent {\huge Spectral Clustering Based on Local PCA}
\renewcommand*{\thefootnote}{\fnsymbol{footnote}} \noindent {\large Ery Arias-Castro\footnote{Corresponding author: \url{math.ucsd.edu/~eariasca}}\renewcommand{\thefootnote}{\arabic{footnote}}\setcounter{footnote}{0}\footnote{University of California, San Diego}, Gilad Lerman \footnote{University of Minnesota, Twin Cities} and Teng Zhang \footnote{Institute for Mathematics and its Applications (IMA)} }
\noindent We propose a spectral clustering method based on local principal components analysis (PCA). After performing local PCA in selected neighborhoods, the algorithm builds a nearest neighbor graph weighted according to a discrepancy between the principal subspaces in the neighborhoods, and then applies spectral clustering. As opposed to standard spectral methods based solely on pairwise distances between points, our algorithm is able to resolve intersections. We establish theoretical guarantees for simpler variants within a prototypical mathematical framework for multi-manifold clustering, and evaluate our algorithm on various simulated data sets.
\noindent {\bf Keywords:} multi-manifold clustering, spectral clustering, local principal component analysis, intersecting clusters.
\section{Introduction} \label{sec:intro}
The task of multi-manifold clustering, where the data are assumed to be located near surfaces embedded in Euclidean space, is relevant in a variety of applications. In cosmology, it arises as the extraction of galaxy clusters in the form of filaments (curves) and walls (surfaces)~\citep{galaxy-nonrandom,MarSaa}; in motion segmentation, moving objects tracked along different views form affine or algebraic surfaces~\citep{Ma07,1530127,vidal2006unified,AtevKSCC}; this is also true in face recognition, in the context of images of faces in fixed pose under varying illumination conditions~\citep{Ho03,Basri03,Epstein95}.
We consider a stylized setting where the underlying surfaces are nonparametric in nature, with a particular emphasis on situations where the surfaces intersect. Specifically, we assume the surfaces are smooth, for otherwise the notion of continuation is potentially ill-posed. For example, without smoothness assumptions, an L-shaped cluster is indistinguishable from the union of two line-segments meeting at right angle.
Spectral methods \citep{1288832} are particularly suited for nonparametric settings, where the underlying clusters are usually far from convex, making standard methods like K-means irrelevant. However, a drawback of standard spectral approaches such as the well-known variation of \citet*{Ng02} is their inability to separate intersecting clusters. Indeed, consider the simplest situation where two straight clusters intersect at right angle, pictured in \figref{segments} below.
\begin{figure}
\caption{Two rectangular clusters intersecting at right angle. Left: the original data. Center: a typical output of the standard spectral clustering method of \citet{Ng02}, which is generally unable to resolve intersections. Right: our method.}
\label{fig:segments}
\end{figure}
The algorithm of \citet{Ng02} is based on pairwise affinities that are decreasing in the distances between data points, making it insensitive to smoothness and, therefore, intersections. And indeed, this algorithm typically fails to separate intersecting clusters, even in the easiest setting of \figref{segments}.
As argued in \citep{Agarwal05,Agarwal06,Shashua06}, a multiway affinity is needed to capture complex structure in data (here, smoothness) beyond proximity attributes. For example, \citet{spectral_applied} use a flatness affinity in the context of {\em hybrid linear modeling}, where the surfaces are assumed to be affine subspaces, an approach subsequently extended to algebraic surfaces via the `kernel trick' \citep*{AtevKSCC}. Moving beyond parametric models, \citet*{higher-order} consider a localized measure of flatness; see also \citet{NIPS2011_0065}. Continuing this line of work, we suggest a spectral clustering method based on the estimation of the local linear structure (tangent bundle) via local principal component analysis (PCA).
The idea of using local PCA combined with spectral clustering has precedents in the literature. In particular, our method is inspired by the work of \citet*{goldberg2009multi}, where the authors develop a spectral clustering method within a semi-supervised learning framework. Local PCA is also used in the multiscale, spectral-flavored algorithm of \citet*{kushnir}. This approach is in the zeitgeist. While writing this paper, we became aware of two very recent publications, by \citet*{wang2011spectral} and by \citet*{Gong2012}, both proposing approaches very similar to ours. We comment on these spectral methods in more detail later on.
The basic proposition of local PCA combined with spectral clustering has two main stages. The first one forms an affinity between a pair of data points that takes into account both their Euclidean distance and a measure of discrepancy between their tangent spaces. Each tangent space is estimated by PCA in a local neighborhood around each point. The second stage applies standard spectral clustering with this affinity. As a reality check, this relatively simple algorithm succeeds at separating the straight clusters in \figref{segments}. We tested our algorithm in more elaborate settings, some of them described in \secref{numerics}.
Besides spectral-type approaches to multi-manifold clustering, other methods appear in the literature. The methods we know of either assume that the different surfaces do not intersect \citep{polito2001grouping}, or that the intersecting surfaces have different intrinsic dimension or density \citep{gionis,Haro06}. The few exceptions tend to propose very complex methods that promise to be challenging to analyze \citep{souvenir,energy}.
Our contribution is the design and detailed study of a prototypical spectral clustering algorithm based on local PCA, tailored to settings where the underlying clusters come from sampling in the vicinity of smooth surfaces that may intersect. We endeavored to simplify the algorithm as much as possible without sacrificing performance. We provide theoretical results for simpler variants within a standard mathematical framework for multi-manifold clustering. To our knowledge, these are the first mathematically backed successes at the task of resolving intersections in the context of multi-manifold clustering, with the exception of \citep{higher-order}, where the corresponding algorithm is shown to succeed at identifying intersecting curves. The salient features of that algorithm are illustrated via numerical experiments.
The rest of the paper is organized as follows. In \secref{algo}, we introduce our methods. In \secref{math}, we analyze our methods in a standard mathematical framework for multi-manifold learning. In \secref{numerics}, we perform some numerical experiments illustrating several features of our algorithm. In \secref{discussion}, we discuss possible extensions.
\section{The methodology} \label{sec:algo}
We introduce our algorithm and simpler variants that are later analyzed in a mathematical framework. We start with some review of the literature, zooming in on the most closely related publications.
\subsection{Some precedents} Using local PCA within a spectral clustering algorithm was implemented in four other publications we know of \citep{goldberg2009multi,kushnir,Gong2012,wang2011spectral}. As a first stage in their semi-supervised learning method, \cite*{goldberg2009multi} design a spectral clustering algorithm. The method starts by subsampling the data points, obtaining `centers' in the following way. Draw ${\boldsymbol y}_1$ at random from the data and remove its $\ell$-nearest neighbors from the data. Then repeat with the remaining data, obtaining centers ${\boldsymbol y}_1, {\boldsymbol y}_2, \dots$. Let ${\boldsymbol C}_i$ denote the sample covariance in the neighborhood of ${\boldsymbol y}_i$ made of its $\ell$-nearest neighbors. An $m$-nearest-neighbor graph is then defined on the centers in terms of the Mahalanobis distances. Explicitly, the centers ${\boldsymbol y}_i$ and ${\boldsymbol y}_j$ are connected in the graph if ${\boldsymbol y}_j$ is among the $m$ nearest neighbors of ${\boldsymbol y}_i$ in Mahalanobis distance \begin{equation} \label{maha}
\|{\boldsymbol C}_{i}^{-1/2}({\boldsymbol y}_i - {\boldsymbol y}_j)\|, \end{equation} or vice-versa. The parameters $\ell$ and $m$ are both chosen of order $\log n$. An existing edge between ${\boldsymbol y}_i$ and ${\boldsymbol y}_j$ is then weighted by $\exp(-H_{ij}^2/\eta^2)$, where $H_{ij}$ denotes the Hellinger distance between the probability distributions $\mathcal{N}({\bf 0}, {\boldsymbol C}_i)$ and $\mathcal{N}({\bf 0}, {\boldsymbol C}_j)$. The spectral graph partitioning algorithm of \citet*{Ng02} --- detailed in \algref{njw} --- is then applied to the resulting affinity matrix, with some form of constrained K-means.
We note that \cite{goldberg2009multi} evaluate their method in the context of semi-supervised learning where the clustering routine is only required to return subclusters of actual clusters. In particular, the data points other than the centers are discarded. Note also that their evaluation is empirical.
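For concreteness, the Hellinger-based edge weight used in this construction can be sketched in a few lines of Python (a toy illustration with hypothetical $2 \times 2$ covariance matrices, not the implementation of \cite{goldberg2009multi}):
\begin{verbatim}
# Edge weight exp(-H^2/eta^2), where H is the Hellinger distance between
# the zero-mean Gaussians N(0, C_i) and N(0, C_j).
import numpy as np

def hellinger_sq(C1, C2):
    # H^2 = 1 - det(C1)^{1/4} det(C2)^{1/4} / det((C1 + C2)/2)^{1/2}
    num = np.linalg.det(C1) ** 0.25 * np.linalg.det(C2) ** 0.25
    den = np.sqrt(np.linalg.det((C1 + C2) / 2.0))
    return 1.0 - num / den

def edge_weight(Ci, Cj, eta):
    return np.exp(-hellinger_sq(Ci, Cj) / eta ** 2)

Ci = np.diag([1.0, 0.01])   # locally flat neighborhood along the x-axis
Cj = np.diag([0.01, 1.0])   # locally flat neighborhood along the y-axis
print(edge_weight(Ci, Ci, eta=0.5), edge_weight(Ci, Cj, eta=0.5))
\end{verbatim}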
\begin{algorithm}[htb] \caption{\quad Spectral Graph Partitioning \citep*{Ng02}} \label{alg:njw}
{\bf Input:} \\ \hspace*{.1in} Affinity matrix ${\boldsymbol W} = (W_{ij})$, size of the partition $K$ \\[.1in]
{\bf Steps:}\\
\hspace*{.1in} {\bf 1:} Compute ${\boldsymbol Z} = (Z_{ij})$ according to $Z_{ij} = W_{ij}/\sqrt{D_i D_j},$ with $D_i = \sum_{j=1}^n W_{ij}$. \\ \hspace*{.1in} {\bf 2:} Extract the top $K$ eigenvectors of ${\boldsymbol Z}$. \\ \hspace*{.1in} {\bf 3:} Renormalize each row of the resulting $n \times K$ matrix. \\ \hspace*{.1in} {\bf 4:} Apply $K$-means to the row vectors. \\[-.05in]
\end{algorithm}
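For reference, \algref{njw} can be written in a few lines of Python using NumPy and scikit-learn (a plain illustration of the steps above, not an optimized implementation):
\begin{verbatim}
# Sketch of Algorithm 1 (spectral graph partitioning of Ng, Jordan and Weiss).
import numpy as np
from sklearn.cluster import KMeans

def njw_spectral_partition(W, K):
    D = W.sum(axis=1)                                   # degrees D_i
    Z = W / np.sqrt(np.outer(D, D))                     # Z_ij = W_ij / sqrt(D_i D_j)
    _, vecs = np.linalg.eigh(Z)                         # eigenvalues in ascending order
    U = vecs[:, -K:]                                    # top K eigenvectors
    U = U / np.linalg.norm(U, axis=1, keepdims=True)    # renormalize each row
    return KMeans(n_clusters=K, n_init=10).fit_predict(U)
\end{verbatim}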
The algorithm proposed by \cite*{kushnir} is multiscale and works by coarsening the neighborhood graph and computing, along the way, sampling density and geometric information such as that obtained via PCA in local neighborhoods. This bottom-up flow is then followed by a top-down pass, and the two are iterated a few times. The algorithm is too complex to be described in detail here, and probably too complex to be analyzed mathematically. The clustering methods of \cite{goldberg2009multi} and ours can be seen as simpler variants that only go bottom up and coarsen the graph only once.
In the last stages of writing this paper, we learned of the works of \cite*{wang2011spectral} and \cite*{Gong2012}, who propose algorithms very similar to our \algref{proj} detailed below. Note that these publications do not provide any theoretical guarantees for their methods, which is one of our main contributions here.
\subsection{Our algorithms} We now describe our method and propose several variants. Our setting is standard: we observe data points ${\boldsymbol x}_1, \dots, {\boldsymbol x}_n \in \mathbb{R}^D$ that we assume were sampled in the vicinity of $K$ smooth surfaces embedded in $\mathbb{R}^D$. The setting is formalized later in \secref{setting}.
\subsubsection{Connected component extraction: comparing local covariances} \label{sec:cov}
We start with our simplest variant, which is also the most natural. The method depends on a neighborhood radius $r > 0$, a spatial scale parameter $\varepsilon > 0$ and a covariance (relative) scale $\eta > 0$. For a vector ${\boldsymbol x}$, $\|{\boldsymbol x}\|$ denotes its Euclidean norm, and for a (square) matrix ${\boldsymbol A}$, $\|{\boldsymbol A}\|$ denotes its spectral norm. For $n \in \mathbb{N}$, we denote by $[n]$ the set $\{1,\ldots,n\}$. Given a data set ${\boldsymbol x}_1, \dots, {\boldsymbol x}_n$, for any point ${\boldsymbol x} \in \mathbb{R}^D$ and $r > 0$, define the neighborhood \begin{equation} \label{Nr}
N_r({\boldsymbol x}) = \{{\boldsymbol x}_j : \|{\boldsymbol x} - {\boldsymbol x}_j\| \le r\}. \end{equation}
\begin{algorithm}[htb] \caption{\quad Connected Component Extraction: Comparing Covariances} \label{alg:cov}
{\bf Input:} \\ \hspace*{.1in} Data points ${\boldsymbol x}_1, \dots, {\boldsymbol x}_n$; neighborhood radius $r > 0$; spatial scale $\varepsilon > 0$, covariance scale $\eta > 0$. \\[.1in] {\bf Steps:}\\
\hspace*{.1in} {\bf 1:} For each $i \in [n]$, compute the sample covariance matrix ${\boldsymbol C}_i$ of $N_r({\boldsymbol x}_i)$. \\ \hspace*{.1in} {\bf 2:} Compute the following affinities between data points: \begin{equation} \label{cov-aff}
W_{ij} = {\rm 1}\kern-0.24em{\rm I}_{\{\|{\boldsymbol x}_i - {\boldsymbol x}_j\| \le \varepsilon\}} \cdot {\rm 1}\kern-0.24em{\rm I}_{\{\|{\boldsymbol C}_i - {\boldsymbol C}_j\| \le \eta r^2\}}. \end{equation}
\hspace*{.1in} {\bf 3:} Remove ${\boldsymbol x}_i$ when there is ${\boldsymbol x}_j$ such that $\|{\boldsymbol x}_{j} - {\boldsymbol x}_i\| \le r$ and $\|{\boldsymbol C}_{j} - {\boldsymbol C}_{i}\| > \eta r^2$. \\ \hspace*{.1in} {\bf 4:} Extract the connected components of the resulting graph. \\ \hspace*{.1in} {\bf 5:} Points removed in Step~3 are grouped with the closest point that survived Step~3. \\[-.05in] \end{algorithm}
In summary, the algorithm first creates an unweighted graph: the nodes of this graph are the data points and edges are formed between two nodes if both the distance between these nodes and the distance between the local covariance structures at these nodes are sufficiently small. After removing the points near the intersection at Step~3, the algorithm then extracts the connected components of the graph.
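For concreteness, here is a minimal NumPy/SciPy sketch of \algref{cov}; it follows the steps above literally (dense pairwise distances, spectral norm for matrices), makes no attempt at efficiency, and the function name is ours.
\begin{verbatim}
# Minimal sketch of Algorithm 2: comparing local covariances.
import numpy as np
from scipy.sparse.csgraph import connected_components

def cluster_by_covariances(X, r, eps, eta):
    n = X.shape[0]
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    # Step 1: sample covariance of each r-neighborhood
    C = np.stack([np.cov(X[dist[i] <= r].T, bias=True) for i in range(n)])
    dC = np.array([[np.linalg.norm(C[i] - C[j], 2) for j in range(n)]
                   for i in range(n)])      # spectral norms ||C_i - C_j||
    # Step 2: affinity W_ij = 1{dist <= eps} * 1{||C_i - C_j|| <= eta r^2}
    W = (dist <= eps) & (dC <= eta * r**2)
    # Step 3: remove points with an r-neighbor whose covariance differs too much
    keep = ~np.any((dist <= r) & (dC > eta * r**2), axis=1)
    # Step 4: connected components of the graph restricted to the kept points
    _, lab = connected_components(W[np.ix_(keep, keep)].astype(float),
                                  directed=False)
    # Step 5: removed points inherit the label of the closest kept point
    labels = np.empty(n, dtype=int)
    labels[keep] = lab
    kept_idx = np.flatnonzero(keep)
    for i in np.flatnonzero(~keep):
        labels[i] = labels[kept_idx[np.argmin(dist[i, kept_idx])]]
    return labels
\end{verbatim}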
In principle, the neighborhood size $r$ is chosen just large enough that performing PCA in each neighborhood yields a reliable estimate of the local covariance structure. For this, the number of points inside the neighborhood needs to be large enough, which depends on the sample size $n$, the sampling density, intrinsic dimension of the surfaces and their surface area (Hausdorff measure), how far the points are from the surfaces (i.e., noise level), and the regularity of the surfaces.
The spatial scale parameter $\varepsilon$ depends on the sampling density and $r$. It needs to be large enough that a point has plenty of points within distance $\varepsilon$, including some across an intersection, so each cluster is strongly connected. At the same time, $\varepsilon$ needs to be small enough that a local linear approximation to the surfaces is a relevant feature of proximity. Its choice is rather similar to the choice of the scale parameter in standard spectral clustering \citep{Ng02,Zelnik-Manor04}.
The covariance scale $\eta$ needs to be large enough that points from the same cluster and within distance $\varepsilon$ of each other have local covariance matrices within distance $\eta r^2$, but small enough that points from different clusters near their intersection have local covariance matrices separated by a distance substantially larger than $\eta r^2$. This depends on the curvature of the surfaces and the incidence angle at the intersection of two (or more) surfaces. Note that a typical covariance matrix over a ball of radius $r$ has norm of order $r^2$, which justifies our choice of parametrization.
In the mathematical framework we introduce later on, these parameters can be chosen automatically as done in \citep{higher-order}, at least when the points are sampled exactly on the surfaces. We will not elaborate on that since in practice this does not inform our choice of parameters.
The rationale behind Step~3 is as follows. As we just discussed, the parameters need to be tuned so that points from the same cluster and within distance $\varepsilon$ have local covariance matrices within distance $\eta r^2$. Hence, ${\boldsymbol x}_i$ and ${\boldsymbol x}_j$ in Step~3 are necessarily from different clusters. Since they are near each other, in our model this will imply that they are close to an intersection. Therefore, roughly speaking, Step~3 removes points near an intersection.
Although this method works in simple situations like that of two intersecting segments (\figref{segments}), it is not meant to be practical. Indeed, extracting connected components is known to be sensitive to spurious points and therefore unstable. Furthermore, we found that comparing local covariance matrices as in affinity \eqref{cov-aff} tends to be less stable than comparing local projections as in affinity \eqref{proj-aff}, which brings us to our next variant.
\subsubsection{Connected component extraction: comparing local projections} \label{sec:proj}
We present another variant also based on extracting the connected components of a neighborhood graph that compares orthogonal projections onto the largest principal directions.
\begin{algorithm}[htb] \caption{\quad Connected Component Extraction: Comparing Projections} \label{alg:proj}
{\bf Input:} \\ \hspace*{.1in} Data points ${\boldsymbol x}_1, \dots, {\boldsymbol x}_n$; neighborhood radius $r > 0$, spatial scale $\varepsilon > 0$, projection scale $\eta > 0$. \\[.1in] {\bf Steps:}\\ \hspace*{.1in} {\bf 1:} For each $i \in [n]$, compute the sample covariance matrix ${\boldsymbol C}_i$ of $N_r({\boldsymbol x}_i)$. \\
\hspace*{.1in} {\bf 2:} Compute the projection ${\boldsymbol Q}_i$ onto the eigenvectors of ${\boldsymbol C}_i$ with eigenvalue exceeding $\sqrt{\eta} \, \|{\boldsymbol C}_i\|$. \\ \hspace*{.1in} {\bf 3:} Compute the following affinities between data points: \begin{equation} \label{proj-aff}
W_{ij} = {\rm 1}\kern-0.24em{\rm I}_{\{\|{\boldsymbol x}_i - {\boldsymbol x}_j\| \le \varepsilon\}} \cdot {\rm 1}\kern-0.24em{\rm I}_{\{\|{\boldsymbol Q}_i - {\boldsymbol Q}_j\| \le \eta\}}. \end{equation}
\hspace*{.1in} {\bf 4:} Extract the connected components of the resulting graph. \\
\end{algorithm}
We note that the local intrinsic dimension is determined by thresholding the eigenvalues of the local covariance matrix, keeping the directions with eigenvalues within some range of the largest eigenvalue. The same strategy is used by \cite{kushnir}, but with a different threshold. The method is a hard version of what we implemented, which we describe in \secref{stylized}.
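In code, Step~2 of \algref{proj} amounts to an eigendecomposition followed by a threshold on the eigenvalues; a minimal NumPy sketch (the helper name is ours) reads as follows.
\begin{verbatim}
# Projection onto the eigenvectors of C with eigenvalue exceeding sqrt(eta)*||C||.
import numpy as np

def local_projection(C, eta):
    vals, vecs = np.linalg.eigh(C)                # ascending eigenvalues
    V = vecs[:, vals > np.sqrt(eta) * vals[-1]]   # ||C|| is the top eigenvalue
    return V @ V.T                                # orthogonal projection onto span(V)
\end{verbatim}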
\subsubsection{Covariances or projections?} \label{sec:compare}
In our numerical experiments, we tried working both directly with covariance matrices as in \eqref{cov-aff} and with projections as in \eqref{proj-aff}. Note that in our experiments we used spectral graph partitioning with soft versions of these affinities, as described in \secref{stylized}. We found working with projections to be more reliable. The problem comes, in part, from boundaries. When a surface has a boundary, local covariances over neighborhoods that overlap with the boundary are quite different from local covariances over nearby neighborhoods that do not touch the boundary.
Consider the example of two segments, $S_1$ and $S_2$, intersecting at an angle of $\theta \in (0, \pi/2)$ at their middle point, specifically \[ S_1 = [-1,1] \times \{0\}, \qquad S_2 = \{(x, x \tan \theta): x \in [-\cos \theta,\cos \theta]\}. \] Assume there is no noise and that the sampling is uniform. Assume $r \in (0, \frac12 \sin \theta)$ so that the disc centered at ${\boldsymbol x}_1 := (1/2,0)$ does not intersect $S_2$, and the disc centered at ${\boldsymbol x}_2 := (\frac12 \cos \theta, \frac12 \sin \theta)$ does not intersect $S_1$. Let ${\boldsymbol x}_0 = (1,0)$. For ${\boldsymbol x} \in S_1 \cup S_2$, let ${\boldsymbol C}_{\boldsymbol x}$ denote the local covariance at ${\boldsymbol x}$ over a ball of radius $r$. Simple calculations yield: \[ {\boldsymbol C}_{{\boldsymbol x}_0} = \frac{r^2}{12} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad {\boldsymbol C}_{{\boldsymbol x}_1} = \frac{r^2}{3} \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad
{\boldsymbol C}_{{\boldsymbol x}_2} = \frac{r^2}{3} \begin{pmatrix} \cos^2 \theta & \sin(\theta) \cos(\theta) \\ \sin(\theta) \cos(\theta) & \sin^2 \theta \end{pmatrix}, \] and therefore \[
\|{\boldsymbol C}_{{\boldsymbol x}_0} - {\boldsymbol C}_{{\boldsymbol x}_1}\| = \frac{r^2}{4}, \quad \|{\boldsymbol C}_{{\boldsymbol x}_1} - {\boldsymbol C}_{{\boldsymbol x}_2}\| = \frac{\sqrt{2} r^2}3 \sin \theta. \] When $\sin \theta \le \frac3{4\sqrt{2}}$ (roughly, $\theta \le 32^\circ$), the difference in Frobenius norm between the local covariances at ${\boldsymbol x}_0, {\boldsymbol x}_1 \in S_1$ is larger than that at ${\boldsymbol x}_1 \in S_1$ and ${\boldsymbol x}_2 \in S_2$. As for projections, however, \[ {\boldsymbol Q}_{{\boldsymbol x}_0} = {\boldsymbol Q}_{{\boldsymbol x}_1} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad {\boldsymbol Q}_{{\boldsymbol x}_2} = \begin{pmatrix} \cos^2 \theta & \sin(\theta) \cos(\theta) \\ \sin(\theta) \cos(\theta) & \sin^2 \theta \end{pmatrix}, \] so that \[
\|{\boldsymbol Q}_{{\boldsymbol x}_0} - {\boldsymbol Q}_{{\boldsymbol x}_1}\| = 0, \quad \|{\boldsymbol Q}_{{\boldsymbol x}_1} - {\boldsymbol Q}_{{\boldsymbol x}_2}\| = \sqrt{2} \sin \theta. \]
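These closed-form expressions are easy to check numerically; the following small sketch does so for an illustrative choice of $\theta$ and $r$, with matrix distances measured in Frobenius norm as above.
\begin{verbatim}
# Numerical check of the two-segment example (theta and r are illustrative).
import numpy as np

theta, r = np.pi / 8, 0.1
P = lambda t: np.array([[np.cos(t)**2,          np.sin(t) * np.cos(t)],
                        [np.sin(t) * np.cos(t), np.sin(t)**2         ]])
C_x0 = (r**2 / 12) * P(0.0)    # endpoint of S_1
C_x1 = (r**2 / 3)  * P(0.0)    # interior point of S_1
C_x2 = (r**2 / 3)  * P(theta)  # interior point of S_2
fro = lambda A: np.linalg.norm(A, 'fro')
print(fro(C_x0 - C_x1), r**2 / 4)                               # agree
print(fro(C_x1 - C_x2), np.sqrt(2) * r**2 / 3 * np.sin(theta))  # agree
print(fro(P(0.0) - P(theta)), np.sqrt(2) * np.sin(theta))       # projections
\end{verbatim}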
While in theory the boundary points account for a small portion of the sample, in practice this is not the case and we find that spectral graph partitioning is challenged by having points near the boundary that are far (in affinity) from nearby points from the same cluster. This may explain why the (soft version of) affinity \eqref{proj-aff} yields better results than the (soft version of) affinity \eqref{cov-aff} in our experiments.
\subsubsection{Spectral Clustering Based on Local PCA} \label{sec:stylized}
The following variant is more robust in practice and is the algorithm we actually implemented. The method assumes that the surfaces are of same dimension $d$ and that there are $K$ of them, with both parameters $K$ and $d$ known.
\begin{algorithm}[htb] \caption{\quad Spectral Clustering Based on Local PCA} \label{alg:spectral}
{\bf Input:} \\ \hspace*{.1in} Data points ${\boldsymbol x}_1, \dots, {\boldsymbol x}_n$; neighborhood radius $r > 0$; spatial scale $\varepsilon > 0$, projection scale $\eta > 0$; intrinsic dimension $d$; number of clusters $K$. \\[.1in] {\bf Steps:}\\ \hspace*{.1in} {\bf 0:} Pick one point ${\boldsymbol y}_1$ at random from the data. Pick another point ${\boldsymbol y}_2$ among the data points not included in $N_r({\boldsymbol y}_1)$, and repeat the process, selecting centers ${\boldsymbol y}_1, \dots, {\boldsymbol y}_{n_0}$.\\ \hspace*{.1in} {\bf 1:} For each $i = 1, \dots, n_0$, compute the sample covariance matrix ${\boldsymbol C}_i$ of $N_r({\boldsymbol y}_i)$. Let ${\boldsymbol Q}_i$ denote the orthogonal projection onto the space spanned by the top $d$ eigenvectors of ${\boldsymbol C}_i$.\\ \hspace*{.1in} {\bf 2:} Compute the following affinities between center pairs: \begin{equation} \label{aff}
W_{ij} = \exp\left(-\frac{\|{\boldsymbol y}_i - {\boldsymbol y}_j\|^2}{\varepsilon^2}\right) \cdot \exp\left(-\frac{\|{\boldsymbol Q}_i - {\boldsymbol Q}_j\|^2}{\eta^2} \right). \end{equation}
\hspace*{.1in} {\bf 3:} Apply spectral graph partitioning (\algref{njw}) to ${\boldsymbol W}$.\\ \hspace*{.1in} {\bf 4:} The data points are clustered according to the closest center in Euclidean distance.\\[-.05in] \end{algorithm}
We note that ${\boldsymbol y}_1, \dots, {\boldsymbol y}_{n_0}$ form an $r$-packing of the data. The rationale for this coarsening, given in \citep{goldberg2009multi}, is that the covariance matrices, and also the top principal directions, change smoothly with the location of the neighborhood, so that without subsampling these characteristics would not help detect the abrupt event of an intersection. The affinity \eqref{aff} is of course a soft version of \eqref{proj-aff}.
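A minimal NumPy sketch of \algref{spectral} is given below; it reuses the \texttt{spectral\_graph\_partitioning} routine sketched after \algref{njw}, the greedy subsampling in Step~0 is one way (ours) of producing the $r$-packing described above, and all pairwise distances are formed densely, with no attempt at efficiency.
\begin{verbatim}
# Minimal sketch of spectral clustering based on local PCA (Algorithm 4).
import numpy as np

def spectral_local_pca(X, r, eps, eta, d, K, seed=0):
    n, D = X.shape
    rng = np.random.default_rng(seed)
    # Step 0: greedy subsampling so that centers are pairwise more than r apart
    centers = []
    for i in rng.permutation(n):
        if all(np.linalg.norm(X[i] - X[j]) > r for j in centers):
            centers.append(i)
    Y, n0 = X[centers], len(centers)
    # Step 1: local covariance and projection onto the top d principal directions
    Q = np.empty((n0, D, D))
    for i in range(n0):
        nbrs = X[np.linalg.norm(X - Y[i], axis=1) <= r]
        vecs = np.linalg.eigh(np.cov(nbrs.T, bias=True))[1][:, -d:]
        Q[i] = vecs @ vecs.T
    # Step 2: soft affinity combining spatial and projection proximity
    dY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    dQ = np.array([[np.linalg.norm(Q[i] - Q[j], 2) for j in range(n0)]
                   for i in range(n0)])
    W = np.exp(-dY**2 / eps**2) * np.exp(-dQ**2 / eta**2)
    # Step 3: spectral graph partitioning of the centers
    center_labels = spectral_graph_partitioning(W, K)
    # Step 4: each data point takes the label of its closest center
    closest = np.argmin(np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2), axis=1)
    return center_labels[closest]
\end{verbatim}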
\subsubsection{Comparison with closely related methods} We highlight some differences with the other proposals in the literature. We first compare our approach to that of \cite{goldberg2009multi}, which was our main inspiration. \begin{itemize} \item {\em Neighborhoods.} Compared with \cite{goldberg2009multi}, we define neighborhoods over $r$-balls instead of $\ell$-nearest neighbors, and connect points over $\varepsilon$-balls instead of $m$-nearest neighbors. This choice is made for convenience, as the two constructions are essentially equivalent when the sampling density is fairly uniform. This is elaborated at length in \citep{1519716,brito,5714241}. \item {\em Mahalanobis distances.}
\cite{goldberg2009multi} use Mahalanobis distances \eqref{maha} between centers. In our version, we could for example replace the Euclidean distance $\|{\boldsymbol x}_i -{\boldsymbol x}_j\|$ in the affinity \eqref{cov-aff} with the average Mahalanobis distance \begin{equation} \label{maha2}
\|{\boldsymbol C}_{i}^{-1/2}({\boldsymbol x}_i - {\boldsymbol x}_j)\| + \|{\boldsymbol C}_{j}^{-1/2}({\boldsymbol x}_j - {\boldsymbol x}_i)\|. \end{equation} We actually tried this and found that the algorithm was less stable, particularly under low noise. Introducing a regularization in this distance --- which requires the introduction of another parameter --- solves this problem partially.
That said, using Mahalanobis distances makes the procedure less sensitive to the choice of $\varepsilon$, in that neighborhoods may include points from different clusters. Think of two parallel line segments separated by a distance of $\delta$, and assume there is no noise, so the points are sampled exactly from these segments. Assuming an infinite sample size, the local covariance is the same everywhere so that points within distance $\varepsilon$ are connected by the affinity \eqref{cov-aff}. Hence, \algref{cov} requires that $\varepsilon < \delta$. In terms of Mahalanobis distances, points on different segments are infinitely separated, so a version based on these distances would work with any $\varepsilon > 0$. In the case of curved surfaces and/or noise, the situation is similar, though not as evident. Even then, the gain in performance guarantees is not obvious, since we only require that $\varepsilon$ be slightly larger in order of magnitude than $r$.
\item {\em Hellinger distances.} As we mentioned earlier, \cite{goldberg2009multi} use Hellinger distances of the probability distributions $\mathcal{N}({\bf 0}, {\boldsymbol C}_i)$ and $\mathcal{N}({\bf 0}, {\boldsymbol C}_j)$ to compare covariance matrices, specifically \begin{equation} \label{hellinger} \left(1 - 2^{D/2} \frac{\det({\boldsymbol C}_i {\boldsymbol C}_j)^{1/4}}{\det({\boldsymbol C}_i + {\boldsymbol C}_j)^{1/2}} \right)^{1/2}, \end{equation} if ${\boldsymbol C}_i$ and ${\boldsymbol C}_j$ are full-rank. While using these distances or the Frobenius distances makes little difference in practice, we find it easier to work with the latter when it comes to proving theoretical guarantees. Moreover, it seems more natural to assume a uniform sampling distribution in each neighborhood rather than a normal distribution, so that using the more sophisticated similarity \eqref{hellinger} does not seem justified.
\item {\em K-means.} We use K-means++ for a good initialization. However, we found that the more sophisticated size-constrained K-means \citep{constrained-k-means} used in \citep{goldberg2009multi} did not improve the clustering results.
\end{itemize}
As we mentioned above, our work was developed in parallel to that of \cite{wang2011spectral} and \cite{Gong2012}. We highlight some differences. They do not subsample, but estimate the local tangent space at each data point ${\boldsymbol x}_i$. \cite{wang2011spectral} fit a mixture of $d$-dimensional affine subspaces to the data using MPPCA \citep{tipping1999mixtures}, which is then used to estimate the tangent subspaces at each data point. \cite{Gong2012} develop a form of robust local PCA. While \cite{wang2011spectral} assume all surfaces are of same dimension known to the user, \cite{Gong2012} estimate the dimension locally by looking at the largest gap in the spectrum of the estimated local covariance matrix. This is similar in spirit to what is done in Step~2 of \algref{proj}, but we did not include this step in \algref{spectral} because we did not find it reliable in practice. We also tried estimating the local dimensionality using the method of \cite{little2009multiscale}, but this failed in the most complex cases.
\cite{wang2011spectral} use a nearest-neighbor graph and their affinity is defined as \begin{equation} \label{wang} W_{ij} = \Delta_{ij} \cdot \left(\prod_{s = 1}^d \cos \theta_s(i,j)\right)^\alpha, \end{equation} where $\Delta_{ij} = 1$ if ${\boldsymbol x}_i$ is among the $\ell$-nearest neighbors of ${\boldsymbol x}_j$, or vice versa, while $\Delta_{ij} = 0$ otherwise; $\theta_1(i,j) \ge \cdots \ge \theta_d(i,j)$ are the principal (aka, canonical) angles \citep{MR1061154} between the estimated tangent subspaces at ${\boldsymbol x}_i$ and ${\boldsymbol x}_j$. $\ell$ and $\alpha$ are parameters of the method. \cite{Gong2012} define an affinity that incorporates the self-tuning method of \cite{Zelnik-Manor04}; in our notation, their affinity is \begin{equation} \label{gong}
\exp\left(-\frac{\|{\boldsymbol x}_i - {\boldsymbol x}_j\|^2}{\varepsilon_i \varepsilon_j}\right) \cdot \exp\left(-\frac{\asin^2 \|{\boldsymbol Q}_i - {\boldsymbol Q}_j\|}{\eta^2 \|{\boldsymbol x}_i - {\boldsymbol x}_j\|^2/(\varepsilon_i \varepsilon_j)} \right), \end{equation} where $\varepsilon_i$ is the distance from ${\boldsymbol x}_i$ to its $\ell$-nearest neighbor, and $\ell$ is a parameter of the method.
Although we do not analyze their respective ways of estimating the tangent subspaces, our analysis provides essential insights into their methods, and for that matter, any other method built on spectral clustering based on tangent subspace comparisons.
\section{Mathematical Analysis} \label{sec:math}
While the analysis of \algref{spectral} seems within reach, there are some complications due to the fact that points near the intersection may form a cluster of their own --- we were not able to discard this possibility. Instead, we study the simpler variants described in \algref{cov} and \algref{proj}. Even then, the arguments are rather complex and interestingly involved. The theoretical guarantees that we thus obtain for these variants are stated in \thmref{main} and proved in \secref{proofs}. We comment on the analysis of \algref{spectral} right after that. We note that there are very few theoretical results on resolving intersecting clusters. In fact, we are only aware of \citep{spectral_theory} in the context of affine surfaces, \citep{Soltanolkotabi_Candes2011} in the context of affine surfaces without noise and \citep{higher-order} in the context of curves.
The generative model we assume is a natural mathematical framework for multi-manifold learning where points are sampled in the vicinity of smooth surfaces embedded in Euclidean space. For concreteness and ease of exposition, we focus on the situation where two surfaces (i.e., $K = 2$) of same dimension $1 \le d \le D$ intersect. This special situation already contains all the geometric intricacies of separating intersecting clusters. On the one hand, clusters of different intrinsic dimension may be separated with an accurate estimation of the local intrinsic dimension without further geometry involved \citep{Haro06}. On the other hand, more complex intersections (3-way and higher) complicate the situation without offering truly new challenges. For simplicity of exposition, we assume that the surfaces are submanifolds without boundary, though it will be clear from the analysis (and the experiments) that the method can handle surfaces with (smooth) boundaries that may self-intersect. We discuss other possible extensions in \secref{discussion}.
Within that framework, we show that \algref{cov} and \algref{proj} are able to identify the clusters accurately except for points near the intersection. Specifically, with high probability with respect to the sampling distribution, \algref{cov} divides the data points into two groups such that, except for points within distance $C \varepsilon$ of the intersection, all points from the first cluster are in one group and all points from the second cluster are in the other group. The constant $C$ depends on the surfaces, including their curvatures, separation between them and intersection angle. The situation for \algref{proj} is more complex, as it may return more than two clusters, but the main feature is that most of two clusters (again, away from the intersection) are in separate connected components.
\subsection{Generative model} \label{sec:setting}
Each surface we consider is a connected, $C^2$ and compact submanifold without boundary and of dimension $d$ embedded in $\mathbb{R}^D$. Any such surface has a positive reach, which is what we use to quantify smoothness. The notion of reach was introduced by \cite{MR0110078}. Intuitively, a surface has reach exceeding $r$ if, and only if, one can roll a ball of radius $r$ on the surface without obstruction~\citep{walther}.
Formally, for ${\boldsymbol x} \in \mathbb{R}^D$ and $S \subset \mathbb{R}^D$, let \[
\dist({\boldsymbol x}, S) = \inf_{{\boldsymbol s} \in S} \|{\boldsymbol x} - {\boldsymbol s}\|, \] and \[ B(S, r) = \{{\boldsymbol x} : \dist({\boldsymbol x}, S) < r\}, \] which is often called the $r$-tubular neighborhood (or $r$-neighborhood) of $S$.
The reach of $S$ is the supremum over $r> 0$ such that, for each ${\boldsymbol x} \in B(S, r)$, there is a unique point in $S$ nearest~${\boldsymbol x}$. It is well-known that, for $C^2$ submanifolds, the reach bounds the radius of curvature from below~\citep[Lem.~4.17]{MR0110078}.
For submanifolds without boundaries, the reach coincides with the condition number introduced in~\citep{1349695}.
When two surfaces $S_1$ and $S_2$ intersect, meaning $S_1 \cap S_2 \neq \emptyset$, we define their incidence angle as \begin{equation} \label{theta} \theta(S_1,S_2) := \inf \left( \theta_{\rm min}(T_{S_1}({\boldsymbol s}), T_{S_2}({\boldsymbol s})) : {\boldsymbol s} \in S_1 \cap S_2\right), \end{equation} where $T_S({\boldsymbol s})$ denotes the tangent subspace of submanifold $S$ at point ${\boldsymbol s} \in S$, and $\theta_{\rm min}(T_1, T_2)$ is the smallest {\em nonzero} principal (aka, canonical) angle between subspaces $T_1$ and $T_2$~\citep{MR1061154}.
The clusters are generated as follows. Each data point ${\boldsymbol x}_i$ is drawn according to \begin{equation} \label{data-point} {\boldsymbol x}_i = {\boldsymbol s}_i + {\boldsymbol z}_i,
\end{equation} where ${\boldsymbol s}_i$ is drawn from the uniform distribution over $S_1 \cup S_2$ and ${\boldsymbol z}_i$ is an additive noise term satisfying $\|{\boldsymbol z}_i\| \leq \tau$ --- thus $\tau$ represents the noise or jitter level, and $\tau = 0$ means that the points are sampled on the surfaces. We assume the points are sampled independently of each other. We let \begin{equation} \label{I} I_k = \{i: {\boldsymbol s}_i \in S_k\}, \end{equation} and the goal is to recover the groups $I_1$ and $I_2$, up to some errors.
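To make the sampling model concrete, the following minimal sketch draws points according to \eqref{data-point} from two intersecting unit circles in the plane (an illustrative choice of $S_1$ and $S_2$ with equal length, so that picking either circle with probability $1/2$ is uniform over $S_1 \cup S_2$); the noise ${\boldsymbol z}_i$ is drawn uniformly from the disc of radius $\tau$.
\begin{verbatim}
# Sample n points near two intersecting unit circles, with noise level tau.
import numpy as np

def sample_two_circles(n, tau, seed=0):
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, 2, size=n)            # index k of the circle S_k
    angles = rng.uniform(0.0, 2.0 * np.pi, size=n)
    shift = np.where(labels[:, None] == 0, [-0.5, 0.0], [0.5, 0.0])
    S = shift + np.c_[np.cos(angles), np.sin(angles)]   # s_i, uniform on S_1 or S_2
    # noise z_i drawn uniformly from the disc of radius tau, so ||z_i|| <= tau
    u = rng.normal(size=(n, 2))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    z = tau * np.sqrt(rng.uniform(size=(n, 1))) * u
    return S + z, labels
\end{verbatim}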
\subsection{Performance guarantees} \label{sec:clustering}
We state some performance guarantees for \algref{cov} and \algref{proj}.
\begin{thm} \label{thm:main} Consider two connected, compact, twice continuously differentiable submanifolds without boundary, of same dimension, intersecting at a strictly positive angle, with the intersection set having strictly positive reach. Assume the parameters are set so that \begin{equation} \label{main} \tau \le r \eta/C, \quad r \le \varepsilon/C, \quad \varepsilon \le \eta/C, \quad \eta \le 1/C, \end{equation} and $C > 0$ is large enough. Then with probability at least $1 - C n \exp\big[- n r^d \eta^2/C\big]$: \begin{itemize} \item \algref{cov} returns exactly two groups such that two points from different clusters are not grouped together unless one of them is within distance $C r$ from the intersection. \item \algref{proj} returns at least two groups, and such that two points from different clusters are not grouped together unless one of them is within distance $C r$ from the intersection. \end{itemize} \end{thm}
We note that the constant $C>0$ depends on what configuration the surfaces are in, in particular their reach and intersection angle, but also aspects that are harder to quantify, like their separation away from their intersection.
We now comment on the challenge of proving a similar result for \algref{spectral}. This algorithm relies on knowledge of the intrinsic dimension of the surfaces $d$ and the number of clusters (here $K=2$), but these may be estimated as in \citep{higher-order}, at least in theory, so we assume these parameters are known. The subsampling done in Step~0 does not pose any problem whatsoever, since the centers are well-spread when the points themselves are. The difficulty resides in the application of the spectral graph partitioning, \algref{njw}. If we were to include the intersection-removal step (Step~3 of \algref{cov}) before applying spectral graph partitioning, then a simple adaptation of arguments in \citep{5714241} would suffice. The real difficulty, and potential pitfall of the method in this framework (without the intersection-removal step), is that the points near the intersection may form their own cluster. For example, in the simplest case of two affine surfaces intersecting at a positive angle and no sampling noise, the projection matrix at a point near the intersection --- meaning a point whose $r$-ball contains a substantial piece of both surfaces --- would be the projection matrix onto $S_1 + S_2$ seen as a linear subspace. We were not able to discard this possibility, although we do not observe this happening in practice. A possible remedy is to constrain the K-means part to only return large-enough clusters. However, a proper analysis of this would require a substantial amount of additional work and we did not engage seriously in this pursuit.
\section{Numerical Experiments} \label{sec:numerics}
We tried our code\footnote{The code is available online at \url{http://www.ima.umn.edu/~zhang620/}.} on a few artificial examples. Very few algorithms were designed to work in the general situation we consider here and we did not compare our method with any other. As we argued earlier, the methods of \cite{wang2011spectral} and \cite{Gong2012} are quite similar to ours, and we encourage the reader to also look at the numerical experiments they performed. Our numerical experiments should be regarded as a proof of concept, only here to show that our method can be implemented and works on some toy examples.
In all experiments, the number of clusters $K$ and the dimension of the manifolds $d$ are assumed known.
We choose spatial scale $\varepsilon$ and the projection scale $\eta$ automatically as follows: we let
\begin{equation}\varepsilon=\max_{1\leq i\leq n_0}\min_{j\neq i} \|{\boldsymbol y}_i -{\boldsymbol y}_j \|,\label{eq:eps}\end{equation} and
\begin{equation}\eta=\operatorname*{median}_{(i,j): \|{\boldsymbol y}_i -{\boldsymbol y}_j \|<\varepsilon}\|{\boldsymbol Q}_i-{\boldsymbol Q}_j\|.\label{eq:eta}\end{equation} Here, we implicitly assume that the union of all the underlying surfaces forms a connected set. In that case, the idea behind choosing $\varepsilon$ as in \eqref{eq:eps} is that we want the $\varepsilon$-graph on the centers ${\boldsymbol y}_1, \dots, {\boldsymbol y}_{n_0}$ to be connected. Then $\eta$ is chosen so that a center ${\boldsymbol y}_i$ remains connected in the $(\varepsilon,\eta)$-graph to most of its neighbors in the $\varepsilon$-graph.
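In code, these two choices read as follows; this is a minimal NumPy sketch in which \texttt{Y} holds the centers, \texttt{Q} their local projections, and the helper name is ours.
\begin{verbatim}
# Automatic choice of the spatial scale eps and the projection scale eta.
import numpy as np

def auto_scales(Y, Q):
    dY = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    np.fill_diagonal(dY, np.inf)
    eps = dY.min(axis=1).max()        # eq. (eps): largest nearest-neighbor distance
    ii, jj = np.where(dY < eps)       # pairs of centers within distance eps
    dQ = [np.linalg.norm(Q[i] - Q[j], 2) for i, j in zip(ii, jj)]
    eta = float(np.median(dQ))        # eq. (eta): median projection distance
    return eps, eta
\end{verbatim}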
The neighborhood radius $r$ is chosen by hand for each situation. Although we do not know how to choose $r$ automatically, there are some general ad hoc guidelines. When $r$ is too large, the local linear approximation to the underlying surfaces may not hold in neighborhoods of radius $r$, resulting in local PCA becoming inappropriate. When $r$ is too small, there might not be enough points in a neighborhood of radius $r$ to accurately estimate the local tangent subspace to a given surface at that location, resulting in local PCA becoming inaccurate. From a computational point of view, the smaller $r$, the larger the number of neighborhoods and the heavier the computations, particularly at the level of spectral graph partitioning. In our numerical experiments, we find that our algorithm is more sensitive to the choice of $r$ when the clustering problem is more difficult. We note that automatic choice of tuning parameters remains a challenge in clustering, and machine learning at large, especially when no labels are available whatsoever. See \citep{Zelnik-Manor04,LBF_journal10,little2009multiscale,Kaslovsky2011}.
Since the algorithm is randomized (see Step~0 in Algorithm~\ref{alg:spectral}), we repeat each simulation $100$ times and report the median misclustering rate, as well as the number of runs in which the misclustering rate is smaller than $5\%$, $10\%$, and $15\%$.
We first run Algorithm~\ref{alg:spectral} on several artificial data sets, shown in the left column of Figures~\ref{fig:simulation_2D} and~\ref{fig:simulation_3D}. Table~\ref{table:simulation} reports the local radius $r$ used for each data set ($R$ is the global radius of each data set), and the statistics for misclustering rates. Typical clustering results are shown in the right column of Figures~\ref{fig:simulation_2D} and~\ref{fig:simulation_3D}. It is evident that Algorithm~\ref{alg:spectral} performs well in these simulations. \begin{table}[h!] \centering \begin{tabular}{ l c c c c c} \hline dataset &$r$ & median misclustering rate & 5\% &10\% & 15\%\\ \hline Three curves& 0.02 (0.034$R$) &4.16\%& 76 & 89 & 89 \\ Self-intersecting curves& 0.1 (0.017$R$)& 1.16\% & 85 & 85 &86\\
Two spheres& 0.2 (0.059$R$)&3.98\% &100&100&100\\
Mobius strips& 0.1 (0.028$R$)&2.22\% &85&86&88\\ Monkey saddle& 0.1 (0.069$R$)&9.73\% &0&67&97\\ Paraboloids& 0.07 (0.048$R$)& 10.42\% &0&12&91\\ \hline \end{tabular} \caption{Choices for $r$ and misclustering statistics for the artificial data sets demonstrated in Figures~\ref{fig:simulation_2D} and~\ref{fig:simulation_3D}. The statistics are based on $100$ repeats and include the median misclustering rate and number of repeats where the misclustering rate is smaller than $5\%$, $10\%$ and $15\%$. \label{table:simulation}} \end{table}
\begin{figure}
\caption{Performance of Algorithm~\ref{alg:spectral} on the data sets ``Three curves'' and ``Self-intersecting curves''. The left column shows the input data sets, and the right column shows a typical clustering result.}
\label{fig:simulation_2D}
\end{figure}
\begin{figure}
\caption{Performance of Algorithm~\ref{alg:spectral} on the data sets ``Two spheres'', ``Mobius strips'', ``Monkey saddle'' and ``Paraboloids''. The left column shows the input data sets, and the right column shows a typical clustering result.}
\label{fig:simulation_3D}
\end{figure}
In another simulation, reported in Table~\ref{table:2curves} and Figure~\ref{fig:2curves}, we show how the success of our algorithm depends on the intersection angle between the curves. Here, we fix two curves intersecting at a point, and gradually decrease the intersection angle by rotating one of them while holding the other one fixed. The angles are $\pi/2$, $\pi/4$, $\pi/6$ and $\pi/8$. From the table we can see that our algorithm performs well when the angle is $\pi/2$ or $\pi/4$, but the performance deteriorates as the angle becomes smaller, and the algorithm almost always fails when the angle is $\pi/8$.
\begin{table}[h!] \centering \begin{tabular}{ l c c c c c} \hline Intersecting angle&$r$ & median misclustering rate & 5\% &10\% & 15\%\\ \hline $\pi/2$& 0.02 (0.034$R$)&2.08\% & 98 & 98 & 98\\ $\pi/4$& 0.02 (0.034$R$)&3.33\% & 92 & 94 & 94\\ $\pi/6$& 0.02 (0.034$R$)& 5.53\% & 32 & 59 & 59\\ $\pi/8$& 0.02 (0.033$R$)&27.87\% & 0 & 2 & 2 \\ \hline \end{tabular} \caption{Choices for $r$ and misclustering statistics for the instances of two intersecting curves demonstrated in \figref{2curves}. The statistics are based on $100$ repeats and include the median misclustering rate and number of repeats where the misclustering rate is smaller than $5\%$, $10\%$ and $15\%$. \label{table:2curves}} \end{table}
\begin{figure}
\caption{Performance of Algorithm~\ref{alg:spectral} on two curves intersecting at various angles $\frac{\pi}{2}$, $\frac{\pi}{4}$, $\frac{\pi}{6}$, $\frac{\pi}{8}$.}
\label{fig:2curves}
\end{figure}
\section{Discussion} \label{sec:discussion}
We distilled the ideas of~\cite{goldberg2009multi} and of~\cite{kushnir} to cluster points sampled near smooth surfaces. The key ingredient is the use of local PCA to learn about the local spread and orientation of the data, so as to use that information in an affinity when building a neighborhood graph.
In a typical stylized setting for multi-manifold clustering, we established performance bounds for the simple variants described in \algref{cov} and \algref{proj}, which essentially consist of connecting points that are close in space and orientation, and then extracting the connected components of the resulting graph. Both are shown to resolve general intersections as long as the incidence angle is strictly positive and the parameters are carefully chosen. As is commonly the case in such analyses, our setting can be generalized to other sampling schemes, to multiple intersections, to some features of the surfaces changing with the sample size, and so on, in the spirit of~\citep{higher-order,5714241,spectral_theory}. We chose to simplify the setup as much as possible while retaining the essential features that make resolving intersecting clusters challenging. The resulting arguments are nevertheless rich enough to satisfy the mathematically thirsty reader.
We implemented a spectral version of \algref{proj}, described in \algref{spectral}, that assumes the intrinsic dimensionality and the number of clusters are known. The resulting approach is very similar to what is offered by \cite{wang2011spectral} and \cite{Gong2012}, although it was developed independently of these works. \algref{spectral} is shown to perform well in some simulated experiments, although it is somewhat sensitive to the choice of parameters. This is the case of all other methods for multi-manifold clustering we know of and choosing the parameters automatically remains an open challenge in the field.
\section{Proofs} \label{sec:proofs}
We start with some additional notation. The ambient space is $\mathbb{R}^D$ unless noted otherwise. For a vector ${\boldsymbol v} \in \mathbb{R}^D$, $\|{\boldsymbol v}\|$ denotes its Euclidean norm and for a real matrix ${\boldsymbol M} \in \mathbb{R}^{D \times D}$, $\|{\boldsymbol M}\|$ denotes the corresponding operator norm. For a point ${\boldsymbol x} \in \mathbb{R}^D$ and $r > 0$, $B({\boldsymbol x}, r)$ denotes the open ball of center ${\boldsymbol x}$ and radius $r$, i.e., $B({\boldsymbol x}, r) = \{{\boldsymbol y} \in \mathbb{R}^D: \|{\boldsymbol y} - {\boldsymbol x}\| < r\}$. For a set $S$ and a point ${\boldsymbol x}$, define $\dist({\boldsymbol x}, S) = \inf\{\|{\boldsymbol x} - {\boldsymbol y}\| : {\boldsymbol y} \in S\}$. For two points ${\boldsymbol a}, {\boldsymbol b}$ in the same Euclidean space, ${\boldsymbol b} - {\boldsymbol a}$ denotes the vector moving ${\boldsymbol a}$ to ${\boldsymbol b}$. For a point ${\boldsymbol a}$ and a vector ${\boldsymbol v}$ in the same Euclidean space, ${\boldsymbol a} + {\boldsymbol v}$ denotes the translate of ${\boldsymbol a}$ by ${\boldsymbol v}$. We identify an affine subspace $T$ with its corresponding linear subspace, for example, when saying that a vector belongs to $T$.
For two subspaces $T$ and $T'$, of possibly different dimensions, we denote by $0 \le \theta_{\rm max}(T,T') \le \pi/2$ the largest and by $\theta_{\rm min}(T, T')$ the smallest nonzero principal angle between $T$ and $T'$ \citep{MR1061154}. When ${\boldsymbol v}$ is a vector and $T$ is a subspace, $\angle({\boldsymbol v}, T) := \theta_{\rm max}(\mathbb{R} {\boldsymbol v}, T)$; this is the usual definition of the angle between ${\boldsymbol v}$ and $T$.
For a subset $A \subset \mathbb{R}^D$ and positive integer $d$, $\operatorname{vol}_d(A)$ denotes the $d$-dimensional Hausdorff measure of $A$, and $\operatorname{vol}(A)$ is defined as $\operatorname{vol}_{\dim(A)}(A)$, where $\dim(A)$ is the Hausdorff dimension of $A$. For a Borel set $A$, let $\lambda_A$ denote the uniform distribution on $A$.
For a set $S \subset \mathbb{R}^D$ with reach at least $1/\kappa$, and ${\boldsymbol x}$ with $\dist({\boldsymbol x}, S) < 1/\kappa$, let $P_S ({\boldsymbol x})$ denote the metric projection of ${\boldsymbol x}$ onto $S$, that is, the point on $S$ closest to ${\boldsymbol x}$. Note that, if $T$ is an affine subspace, then $P_T$ is the usual orthogonal projection onto $T$. Let $\mathcal{S}_d(\kappa)$ denote the class of connected, $C^2$ and compact $d$-dimensional submanifolds without boundary embedded in $\mathbb{R}^D$, with reach at least $1/\kappa$. For a submanifold $S \subset \mathbb{R}^D$, let $T_S({\boldsymbol x})$ denote the tangent space of $S$ at ${\boldsymbol x} \in S$.
We will often identify a linear map with its matrix in the canonical basis. For a symmetric (real) matrix ${\boldsymbol M}$, let $\beta_1({\boldsymbol M}) \ge \beta_2({\boldsymbol M}) \ge \cdots$ denote its eigenvalues in decreasing order.
We say that $f: \Omega \subset \mathbb{R}^D \to \mathbb{R}^D$ is $C$-Lipschitz if
$\|f({\boldsymbol x}) - f({\boldsymbol y})\| \le C \|{\boldsymbol x} - {\boldsymbol y}\|, \forall {\boldsymbol x},{\boldsymbol y} \in \Omega.$
For two reals $a$ and $b$, $a \vee b = \max(a,b)$ and $a \wedge b = \min(a,b)$. Additional notation will be introduced as needed.
\subsection{Preliminaries} \label{sec:prelim}
This section gathers a number of general results from geometry and probability. We took time to package them into standalone lemmas that could be of potential independent interest, particularly to researchers working in machine learning and computational geometry.
\subsubsection{Smooth surfaces and their tangent subspaces} The following result is on approximating a smooth surface near a point by the tangent subspace at that point. It is based on \citep[Th.~4.18(2)]{MR0110078}. \begin{lem} \label{lem:S-approx} For $S \in \mathcal{S}_d(\kappa)$, and any two points ${\boldsymbol s}, {\boldsymbol s}' \in S$, \begin{equation} \label{S-approx1}
\dist({\boldsymbol s}', T_S({\boldsymbol s})) \leq \frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\|^2, \end{equation} and when $\dist({\boldsymbol s}', T_S({\boldsymbol s})) \le 1/\kappa$, \begin{equation} \label{S-approx2}
\dist({\boldsymbol s}', T_S({\boldsymbol s})) \leq \kappa \|P_{T_S({\boldsymbol s})}({\boldsymbol s}') - {\boldsymbol s}\|^2.
\end{equation} Moreover, for ${\boldsymbol t} \in T_S({\boldsymbol s})$ such that $\|{\boldsymbol s} - {\boldsymbol t}\| \le 7/(16 \kappa)$, \begin{equation} \label{S-approx3}
\dist({\boldsymbol t}, S) \leq \kappa \|{\boldsymbol t} - {\boldsymbol s}\|^2. \end{equation} \end{lem}
\begin{proof} Let $T$ be short for $T_S({\boldsymbol s})$. \citep[Th.~4.18(2)]{MR0110078} says that \begin{equation} \label{4.18}
\dist({\boldsymbol s}' -{\boldsymbol s}, T) \le \frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\|^2. \end{equation} Immediately, we have \[
\dist({\boldsymbol s}' -{\boldsymbol s}, T) = \|{\boldsymbol s}' - P_{T}({\boldsymbol s}')\| = \dist({\boldsymbol s}', T), \] and \eqref{S-approx1} comes from that. Based on that and Pythagoras theorem, we have
\[\dist({\boldsymbol s}', T) = \|P_{T}({\boldsymbol s}') - {\boldsymbol s}'\| \leq \frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\|^2 = \frac{\kappa}2 \big(\|P_{T}({\boldsymbol s}') - {\boldsymbol s}'\|^2 + \|P_{T}({\boldsymbol s}') - {\boldsymbol s}\|^2\big),\] so that
\[\dist({\boldsymbol s}', T) \big(1 - \frac\kappa2 \dist({\boldsymbol s}', T) \big) \le \frac{\kappa}2 \|P_{T}({\boldsymbol s}') - {\boldsymbol s}\|^2,\] and \eqref{S-approx2} follows easily from that. For \eqref{S-approx3}, let ${\boldsymbol s}' = P_{T}^{-1}({\boldsymbol t})$, which is well-defined by \lemref{proj} below and belongs to $B({\boldsymbol s}, 1/(2\kappa))$.
We then apply \eqref{S-approx2} to get
\[\dist({\boldsymbol t}, S) \le \|{\boldsymbol t} - {\boldsymbol s}'\| = \dist({\boldsymbol s}', T) \le \kappa \|{\boldsymbol t} - {\boldsymbol s}\|^2.\] \end{proof}
We need a bound on the angle between tangent subspaces on a smooth surface as a function of the distance between the corresponding points of contact. This could be deduced directly from \citep[Prop.~6.2, 6.3]{1349695}, but the resulting bound is much looser --- and the underlying proof much more complicated --- than the following, which is again based on \citep[Th.~4.18(2)]{MR0110078}.
\begin{lem} \label{lem:T-diff} For $S \in \mathcal{S}_d(\kappa)$, and any ${\boldsymbol s}, {\boldsymbol s}' \in S$, \begin{equation} \label{T-diff}
\theta_{\rm max}(T_S({\boldsymbol s}), T_S({\boldsymbol s}')) \le 2 \asin\left(\frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\| \wedge 1\right). \end{equation} \end{lem}
\begin{proof} By \eqref{4.18} applied twice, we have \[
\dist({\boldsymbol s}' -{\boldsymbol s}, T_S({\boldsymbol s})) \vee \dist({\boldsymbol s} -{\boldsymbol s}', T_S({\boldsymbol s}'))\le \frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\|^2. \] Noting that \begin{equation} \label{vT}
\dist({\boldsymbol v}, T) = \|{\boldsymbol v}\| \sin \angle({\boldsymbol v}, T), \end{equation} for any vector ${\boldsymbol v}$ and any linear subspace $T$, we get \[
\sin \angle({\boldsymbol s}' -{\boldsymbol s}, T_S({\boldsymbol s})) \ \vee \ \sin \angle({\boldsymbol s}' -{\boldsymbol s}, T_S({\boldsymbol s}')) \le \frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\|. \] Noting that the LHS never exceeds 1, and applying the arcsine function --- which is increasing --- on both sides, yields \[
\angle({\boldsymbol s}' -{\boldsymbol s}, T_S({\boldsymbol s})) \vee \angle({\boldsymbol s}' -{\boldsymbol s}, T_S({\boldsymbol s}')) \le \asin\left(\frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\| \wedge 1\right). \] We then use the triangle inequality \[ \theta_{\rm max}(T_S({\boldsymbol s}), T_S({\boldsymbol s}')) \le \angle({\boldsymbol s}' -{\boldsymbol s}, T_S({\boldsymbol s})) + \angle({\boldsymbol s}' -{\boldsymbol s}, T_S({\boldsymbol s}')), \] and conclude. \end{proof}
Below we state some properties of a projection onto a tangent subspace. A result similar to the first part was proved in \citep[Lem.~2]{higher-order} based on results in \citep{1349695}, but the arguments are simpler here and the constants are sharper. \begin{lem} \label{lem:proj} Take $S \in \mathcal{S}_d(\kappa)$, ${\boldsymbol s} \in S$ and $r \le \frac1{2\kappa}$, and let $T$ be short for $T_S({\boldsymbol s})$. $P_{T}$ is injective on $B({\boldsymbol s}, r) \cap S$ and its image contains $B({\boldsymbol s}, r') \cap T$, where $r' := (1 - \frac12 (\kappar)^2) r$. Moreover, $P_{T}^{-1}$ has Lipschitz constant bounded by $1 + \frac{64}{49} (\kappar)^2$ over $B({\boldsymbol s}, r) \cap T$, for any $r \le \frac7{16 \kappa}$. \end{lem}
\begin{proof} Take ${\boldsymbol s}', {\boldsymbol s}'' \in S$ distinct such that $P_{T}({\boldsymbol s}') = P_{T}({\boldsymbol s}'')$. Equivalently, ${\boldsymbol s}'' - {\boldsymbol s}'$ is perpendicular to $T_S({\boldsymbol s})$. Let $T'$ be short for $T_S({\boldsymbol s}')$. By \eqref{4.18} and \eqref{vT}, we have \[
\angle({\boldsymbol s}'' - {\boldsymbol s}', T') \le \asin\left(\frac{\kappa}2 \|{\boldsymbol s}'' - {\boldsymbol s}'\| \wedge 1\right), \] and by \eqref{T-diff}, \[
\theta_{\rm max}(T, T') \le 2 \asin\left(\frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\| \wedge 1\right). \] Now, by the triangle inequality, \[ \angle({\boldsymbol s}'' - {\boldsymbol s}', T') \ge \angle({\boldsymbol s}'' - {\boldsymbol s}', T) - \theta_{\rm max}(T, T') = \frac\pi2 - \theta_{\rm max}(T, T'), \] so that \[
\asin\left(\frac{\kappa}2 \|{\boldsymbol s}'' - {\boldsymbol s}'\| \wedge 1\right) \ge \frac\pi2 - 2 \asin\left(\frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\| \wedge 1\right). \]
When $\|{\boldsymbol s}' - {\boldsymbol s}\| \le 1/\kappa$, the RHS is bounded from below by $\pi/2 - 2 \asin(1/2) $, which then implies that $\frac{\kappa}2 \|{\boldsymbol s}'' - {\boldsymbol s}'\| \ge \sin(\pi/2 - 2 \asin(1/2)) = 1/2$, that is, $\|{\boldsymbol s}'' - {\boldsymbol s}'\| \ge 1/\kappa$. This precludes the situation where ${\boldsymbol s}', {\boldsymbol s}'' \in B({\boldsymbol s}, 1/(2\kappa))$, so that $P_{T}$ is injective on $B({\boldsymbol s}, r)$ when $r \le 1/(2\kappa)$.
The same arguments imply that $P_T$ is an open map on $R := B({\boldsymbol s}, r) \cap S$. In particular, $P_T(R)$ contains an open ball in $T$ centered at ${\boldsymbol s}$ and $P_T(\partial R) = \partial P_T(R)$, with $\partial R = S \cap \partial B({\boldsymbol s}, r)$ since $\partial S = \emptyset$. Now take any ray out of ${\boldsymbol s}$ within $T$, which is necessarily of the form ${\boldsymbol s} + \mathbb{R} {\boldsymbol v}$, where ${\boldsymbol v}$ is a unit vector in $ T$. Let ${\boldsymbol t}_a = {\boldsymbol s} + a {\boldsymbol v} \in T$ for $a \in [0, \infty)$. Let $a_*$ be the infimum over all $a > 0$ such that ${\boldsymbol t}_a \in P_T(R)$. Note that $a_* > 0$ and ${\boldsymbol t}_{a_*} \in P_T(\partial R)$, so that there is ${\boldsymbol s}_* \in \partial R$ such that $P_T({\boldsymbol s}_*) = {\boldsymbol t}_{a_*}$. Let ${\boldsymbol s}(a) = P_T^{-1}({\boldsymbol s} + a {\boldsymbol v})$, which is well-defined on $[0,a_*]$ by definition of $a_*$ and the fact that $P_T$ is injective on $R$. We have that $\dot{\boldsymbol s}(a) = D_{{\boldsymbol t}_a} P_T^{-1} {\boldsymbol v}$ is the unique vector in $T_a := T_S(P_T^{-1}({\boldsymbol t}_a))$ such that $P_T(\dot{\boldsymbol s}(a)) = {\boldsymbol v}$. Elementary geometry shows that \[
\|P_T (\dot{\boldsymbol s}(a))\| = \|\dot{\boldsymbol s}(a)\| \cos \angle(\dot{\boldsymbol s}(a), T) \ge \|\dot{\boldsymbol s}(a)\| \cos \theta_{\rm max}(T_a, T), \] with \[
\cos \theta_{\rm max}(T_a, T) \ge \cos\left[2 \asin\left(\frac{\kappa}2 \|{\boldsymbol s}(a) - {\boldsymbol s}\|\right) \right] \ge \zeta := 1 - \frac12 (\kappar)^2,
\]
by \eqref{T-diff}, $\|{\boldsymbol s}(a) - {\boldsymbol s}\| \le r$ and $\cos[2 \asin (x)] = 1 - 2x^2$ when $0 \le x \le 1$. Since $\|P_T (\dot{\boldsymbol s}(a))\| = \|{\boldsymbol v}\| = 1$, we have
$\|\dot{\boldsymbol s}(a)\| \le 1/\zeta,$
and this holds for all $a < a_*$. So we can extend ${\boldsymbol s}(a)$ to $[0,a_*]$ into a Lipschitz function with constant $1/\zeta$. Together with the fact that ${\boldsymbol s}_* \in \partial B({\boldsymbol s}, r)$, this implies that \[
r = \|{\boldsymbol s}_* - {\boldsymbol s}\| = \|{\boldsymbol s}(a_*) - {\boldsymbol s}(0)\| \le \frac1\zeta \|a_* {\boldsymbol v}\| = \frac{ a_*}\zeta.
\] Hence, $a_* \ge \zeta r$ and therefore $P_T(R)$ contains $B({\boldsymbol s}, \zeta r) \cap T$ as stated.
For the last part, fix $r \le \frac{7}{16\kappa}$, so there is a unique $h \le 1/(2\kappa)$ such that $\zeta h = r$, where $\zeta$ is redefined as $\zeta := 1 - \frac12 (\kappa h)^2$. Take ${\boldsymbol t}' \in B({\boldsymbol s}, r) \cap T$ and let ${\boldsymbol s}' = P_T^{-1}({\boldsymbol t}')$ and $T' = T_S({\boldsymbol s}')$. We saw that $P_T^{-1}$ is Lipschitz with constant $1/\zeta$ on any ray from ${\boldsymbol s}$ of length $r$, so that $\|{\boldsymbol s}' - {\boldsymbol s}\| \le (1/\zeta) \|{\boldsymbol t}' - {\boldsymbol s}\| \le r/\zeta = h$. The differential of $P_{T}$ at ${\boldsymbol s}'$ is $P_T$ itself, seen as a linear map between $T'$ and $T$. Then for any vector ${\boldsymbol u} \in T'$, we have \[
\|P_T ({\boldsymbol u})\| = \|{\boldsymbol u}\| \cos \angle({\boldsymbol u}, T) \ge \|{\boldsymbol u}\| \cos \theta_{\rm max}(T', T), \] with \[
\cos \theta_{\rm max}(T', T) \ge \cos\left[2 \asin\left(\frac{\kappa}2 \|{\boldsymbol s}' - {\boldsymbol s}\|\right) \right] \ge 1 - \frac12 (\kappa h)^2 = \zeta, \]
as before. Hence, $\| D_{{\boldsymbol t}'} P_T^{-1} \| \le 1/\zeta$, and we proved this for all ${\boldsymbol t}' \in B({\boldsymbol s}, r) \cap T$. Since that set is convex, we can apply Taylor's theorem and get that $P_T^{-1}$ is Lipschitz on that set with constant $1/\zeta$. We then have \[1/\zeta \le 1 + (\kappa h)^2 \le 1 + \frac{64}{49} (\kappa r)^2,\] because $\kappa h \le 1/2$ and $r = \zeta h \ge 7h/8$. \end{proof}
\subsubsection{Volumes and uniform distributions}
Below is a result that quantifies how much the volume of a set changes when applying a Lipschitz map. This is well-known in measure theory and we only provide a proof for completeness.
\begin{lem} \label{lem:Lip-vol} Suppose $\Omega$ is a measurable subset of $\mathbb{R}^D$ and $f: \Omega \subset \mathbb{R}^D \to \mathbb{R}^D$ is $C$-Lipschitz. Then for any measurable set $A \subset \Omega$ and real $d > 0$, $\operatorname{vol}_d(f(A)) \le C^d \operatorname{vol}_d(A)$. \end{lem}
\begin{proof} By definition, \[\operatorname{vol}_d(A) = \lim_{t \to 0} \ V_d^t(A), \qquad V_d^t(A) : = \inf_{(R_i) \in \mathcal{R}^t(A)} \ \sum_{i \in \mathbb{N}} \diam(R_i)^d,\] where $\mathcal{R}^t(A)$ is the class of countable sequences $(R_i : i \in \mathbb{N})$ of subsets of $\mathbb{R}^D$ such that $A \subset \bigcup_i R_i$ and $\diam(R_i) < t$ for all $i$. Since $f$ is $C$-Lipschitz, $\diam(f(R)) \le C \diam(R)$ for any $R \subset \Omega$. Hence, for any $(R_i) \in \mathcal{R}^t(A)$, $(f(R_i)) \in \mathcal{R}^{C t}(f(A))$. This implies that \[V_d^{Ct}(f(A)) \le \sum_{i \in \mathbb{N}} \diam(f(R_i))^d \le C^d \sum_{i \in \mathbb{N}} \diam(R_i)^d.\] Taking the infimum over $(R_i) \in \mathcal{R}^t(A)$, we get $V_d^{Ct}(f(A)) \le C^d V_d^{t}(A)$, and we conclude by taking the limit as $t \to 0$, noticing that $V_d^{Ct}(f(A)) \to \operatorname{vol}_d(f(A))$. \end{proof}
We compare below two uniform distributions. For two Borel probability measures $P$ and $Q$ on $\mathbb{R}^D$, ${\rm TV}(P, Q)$ denotes their total variation distance, meaning, \[
{\rm TV}(P, Q) = \sup\{|P(A) - Q(A)| : A \text{ Borel set}\}. \] Remember that for a Borel set $A$, $\lambda_A$ denotes the uniform distribution on $A$.
\begin{lem} \label{lem:2unif} Suppose $A$ and $B$ are two Borel subsets of $\mathbb{R}^D$. Then \[ {\rm TV}(\lambda_A, \lambda_B) \le 4 \ \frac{\operatorname{vol}(A \, \triangle \, B)}{\operatorname{vol}(A \cup B)}. \] \end{lem}
\begin{proof} If $A$ and $B$ are not of same dimension, say $\dim(A) > \dim(B)$, then ${\rm TV}(\lambda_A, \lambda_B) = 1$ since $\lambda_A(B) = 0$ while $\lambda_B(B) = 1$. And we also have \[\operatorname{vol}(A \, \triangle \, B) = \operatorname{vol}_{\dim(A)}(A \, \triangle \, B) = \operatorname{vol}_{\dim(A)}(A) = \operatorname{vol}(A),\] and \[\operatorname{vol}(A \cup B) = \operatorname{vol}_{\dim(A)}(A \cup B) = \operatorname{vol}_{\dim(A)}(A) = \operatorname{vol}(A),\] in both cases because $\operatorname{vol}_{\dim(A)}(B) = 0$. So the result works in that case.
Therefore assume that $A$ and $B$ are of same dimension. Assume WLOG that $\operatorname{vol}(A) \ge \operatorname{vol}(B)$. For any Borel set $U$, \[\lambda_A(U) - \lambda_B(U) = \frac{\operatorname{vol}(A \cap U)}{\operatorname{vol}(A)} - \frac{\operatorname{vol}(B \cap U)}{\operatorname{vol}(B)},\] so that \begin{eqnarray*}
|\lambda_A(U) - \lambda_B(U)|
&\le& \frac{|\operatorname{vol}(A \cap U) - \operatorname{vol}(B \cap U)|}{\operatorname{vol}(A)} + \operatorname{vol}(B \cap U) \left|\frac1{\operatorname{vol}(A)} - \frac1{\operatorname{vol}(B)}\right| \\
&\le& \frac{\operatorname{vol}(A \, \triangle \, B)}{\operatorname{vol}(A)} + \frac{\operatorname{vol}(B \cap U)}{\operatorname{vol}(B)} \frac{|\operatorname{vol}(A) - \operatorname{vol}(B)|}{\operatorname{vol}(A)} \\ &\le& \frac{2 \operatorname{vol}(A \, \triangle \, B)}{\operatorname{vol}(A)}, \end{eqnarray*} and we conclude with the fact that $\operatorname{vol}(A \cup B) \le \operatorname{vol}(A) + \operatorname{vol}(B) \le 2 \operatorname{vol}(A)$. \end{proof}
We now look at the projection of the uniform distribution on a neighborhood of a surface onto a tangent subspace. For a Borel probability measure $P$ and measurable function $f: \mathbb{R}^D \to \mathbb{R}^D$, $P^f$ denotes the push-forward (Borel) measure defined by $P^f(A) = P(f^{-1}(A))$.
\begin{lem} \label{lem:TV-map} Suppose $A \subset \mathbb{R}^D$ is Borel and $f: A \to \mathbb{R}^D$ is invertible on $f(A)$, and that both $f$ and $f^{-1}$ are $C$-Lipschitz. Then \[ {\rm TV}(\lambda_A^f, \lambda_{f(A)}) \le 8 (C^{\dim(A)} -1). \] \end{lem}
\begin{proof} First, note that $A$ and $f(A)$ are both of same dimension, and that $C \ge 1$ necessarily. Let $d$ be short for $\dim(A)$. Take $U \subset f(A)$ Borel and let $V = f^{-1}(U)$. Then \[\lambda_A^f(U) = \frac{\operatorname{vol}( A \cap V)}{\operatorname{vol}(A)}, \qquad \lambda_{f(A)}(U) = \frac{\operatorname{vol}( f(A) \cap U)}{\operatorname{vol}(f(A))},\] \[
|\lambda_A^f(U) - \lambda_{f(A)}(U)|
\le \frac{|\operatorname{vol}(A \cap V) - \operatorname{vol}(f(A) \cap U)|}{\operatorname{vol}(A)} + \frac{|\operatorname{vol}(A) - \operatorname{vol}(f(A))|}{\operatorname{vol}(A)}. \] $f$ being invertible, we have $f(A \cap V) = f(A) \cap U$ and $f^{-1}(f(A) \cap U) = A \cap V$. Therefore, applying \lemref{Lip-vol}, we get \[ C^{-d} \le \frac{\operatorname{vol}(f(A) \cap U)}{\operatorname{vol}(A \cap V)} \le C^d, \] so that
\[|\operatorname{vol}(A \cap V) - \operatorname{vol}(f(A) \cap U)| \le (C^{d} -1) \operatorname{vol}(A \cap V) \le (C^{d} -1) \operatorname{vol}(A).\] Similarly,
\[ |\operatorname{vol}(A) - \operatorname{vol}(f(A))| \le (C^d -1) \operatorname{vol}(A).\] We then conclude with \lemref{2unif}. \end{proof}
Now comes a technical result on the intersection of a smooth surface and a ball.
\begin{lem} \label{lem:TV} There is a constant $\Clref{TV} \ge 3$ depending only on $d$ such that the following is true. Take $S \in \mathcal{S}_d(\kappa)$, $r < \frac1{\Clref{TV} \kappa}$ and ${\boldsymbol x} \in \mathbb{R}^D$ such that $\dist({\boldsymbol x}, S) < r$. Let ${\boldsymbol s} = P_S({\boldsymbol x})$ and $T = T_S({\boldsymbol s})$. Then
\[\operatorname{vol}\big(P_T(S \cap B({\boldsymbol x}, r)) \, \triangle \, (T \cap B({\boldsymbol x}, r))\big) \le \Clref{TV} (\|{\boldsymbol x} -{\boldsymbol s}\| + r^2) \, \operatorname{vol}(T \cap B({\boldsymbol x}, r)).\] \end{lem}
\begin{proof}
Let $A_r = B({\boldsymbol s}, r)$, $B_r = B({\boldsymbol x}, r)$ and $g = P_{T}$ for short. Note that $T \cap B_r = T \cap A_{r_0}$ where $r_0 := (r^2 - \delta^2)^{1/2}$ and $\delta := \|{\boldsymbol x} -{\boldsymbol s}\|$. Take ${\boldsymbol s}_1 \in S \cap B_r$ such that $g({\boldsymbol s}_1)$ is farthest from ${\boldsymbol s}$, so that $g(S \cap B_r) \subset A_{r_1}$ where $r_1 := \|{\boldsymbol s} - g({\boldsymbol s}_1)\|$ --- note that $r_1 \le r$. Let $\ell_1 = \|{\boldsymbol s}_1 - g({\boldsymbol s}_1)\|$ and ${\boldsymbol y}_1$ be the orthogonal projection of ${\boldsymbol s}_1$ onto the line $({\boldsymbol x}, {\boldsymbol s})$. By Pythagoras theorem, we have $ \|{\boldsymbol x} - {\boldsymbol s}_1\|^2 = \|{\boldsymbol x} - {\boldsymbol y}_1\|^2 + \|{\boldsymbol y}_1 - {\boldsymbol s}_1\|^2$. We have $ \|{\boldsymbol x} - {\boldsymbol s}_1\| \le r$ and $ \|{\boldsymbol y}_1 - {\boldsymbol s}_1\| = \|{\boldsymbol s} - g({\boldsymbol s}_1)\| = r_1$. And because $\ell_1 \le \kappa r_1^2 < r$ by \eqref{S-approx2}, either ${\boldsymbol y}_1$ is between ${\boldsymbol x}$ and ${\boldsymbol s}$, in which case $\|{\boldsymbol x} - {\boldsymbol y}_1\| = \delta - \ell_1$, or ${\boldsymbol s}$ is between ${\boldsymbol x}$ and ${\boldsymbol y}_1$, in which case $\|{\boldsymbol x} - {\boldsymbol y}_1\| = \delta + \ell_1$. In any case, $r^2 \ge r_1^2 + (\delta - \ell_1)^2$, which together with $\ell_1 \le \kappa r_1^2$ implies $r_1^2 \le r^2 - \delta^2 + 2 \delta \ell_1 \le r_0^2 + 2\kappar_1^2 \delta$, leading to $r_1 \le (1-2\kappa\delta)^{-1/2} r_0 \le (1 + 4\kappa \delta) r_0$ after noticing that $\delta \le r < 1/(3\kappa)$. From $g(S \cap B_r) \subset T \cap A_{r_1}$, we get \begin{eqnarray*} \operatorname{vol}\big(g(S \cap B_r) \setminus (T \cap B_r)\big) &\le& \operatorname{vol}(T \cap A_{r_1}) - \operatorname{vol}(T \cap A_{r_0}) \\ &=& ((r_1/r_0)^d - 1) \operatorname{vol}(T \cap A_{r_0}). \end{eqnarray*}
We follow similar arguments to get a sort of reverse relationship. Take ${\boldsymbol s}_2 \in S \cap B_r$ such that $g(S \cap B_r) \supset T \cap A_{r_2}$, where $r_2 := \|{\boldsymbol s} - g({\boldsymbol s}_2)\|$ is largest. Assuming $r$ is small enough, by \lemref{proj}, $g^{-1}$ is well-defined on $T \cap A_r$, so that necessarily ${\boldsymbol s}_2 \in \partial B_r$. Let $\ell_2 = \|{\boldsymbol s}_2 - g({\boldsymbol s}_2)\|$ and ${\boldsymbol y}_2$ be the orthogonal projection of ${\boldsymbol s}_2$ onto the line $({\boldsymbol x}, {\boldsymbol s})$. By Pythagoras theorem, we have $ \|{\boldsymbol x} - {\boldsymbol s}_2\|^2 = \|{\boldsymbol x} - {\boldsymbol y}_2\|^2 + \|{\boldsymbol y}_2 - {\boldsymbol s}_2\|^2$. We have $ \|{\boldsymbol x} - {\boldsymbol s}_2\| = r$ and $ \|{\boldsymbol y}_2 - {\boldsymbol s}_2\| = \|{\boldsymbol s} - g({\boldsymbol s}_2)\| = r_2$. And by the triangle inequality, $\|{\boldsymbol x} - {\boldsymbol y}_2\| \le \|{\boldsymbol x} - {\boldsymbol s}\| + \|{\boldsymbol y}_2 - {\boldsymbol s}\| = \delta + \ell_2$. Hence, $r^2 \le r_2^2 + (\delta + \ell_2)^2$, which together with $\ell_2 \le \kappa r_2^2$ by \eqref{S-approx2}, implies $r_2^2 \ge r^2 - \delta^2 - 2 \delta \ell_2 - \ell_2^2 \ge r_0^2 - (2 \delta + \kappa r^2) \kappar_2^2$, leading to $r_2 \ge (1+2\kappa\delta+\kappa^2 r^2)^{-1/2} r_0 \ge (1 - 2 \kappa \delta - \kappa^2 r^2) r_0$. From $g(S \cap B_r) \supset T \cap A_{r_2}$, we get \begin{eqnarray*} \operatorname{vol}\big((T \cap B_r) \setminus g(S \cap B_r)\big) &\le& \operatorname{vol}(T \cap A_{r_0}) - \operatorname{vol}(T \cap A_{r_2}) \\ &=& (1 - (r_2/r_0)^d) \operatorname{vol}(T \cap A_{r_0}). \end{eqnarray*}
Altogether, we have \begin{eqnarray*} \operatorname{vol}\big(g(S \cap B_r) \, \triangle \, (T \cap B_r)\big) &\le& \big((r_1/r_0)^d - (r_2/r_0)^d\big) \ \operatorname{vol}(T \cap A_{r_0}) \\ &\le& \big( (1 + 4\kappa \delta)^d - (1 - 2 \kappa \delta - \kappa^2 r^2)^d\big) \operatorname{vol}(T \cap B_r), \end{eqnarray*} with $(1 + 4\kappa \delta)^d - (1 - 2 \kappa \delta - \kappa^2 r^2)^d \le C (\delta + r^2)$ when $\delta \le r \le 1/(3 \kappa)$, for a constant $C$ depending only on $d$ and $\kappa$. The result follows from this. \end{proof}
We bound from below the $d$-volume of the intersection of a ball with a smooth surface. Though it could be obtained as a special case of \lemref{TV}, we provide a direct proof because this result is a cornerstone of many results in the literature on sampling points uniformly on a smooth surface.
\begin{lem} \label{lem:ball-vol} Suppose $S \in \mathcal{S}_d(\kappa)$. Then for any ${\boldsymbol s} \in S$ and $r < \frac1{(d \vee 3)\kappa}$, we have \[1 - 2d\kappa r \le \frac{\operatorname{vol}(S \cap B({\boldsymbol s}, r))}{\operatorname{vol}(T \cap B({\boldsymbol s}, r))} \le 1 + 2d\kappa r,\] where $T := T_S({\boldsymbol s})$ is the tangent subspace of $S$ at ${\boldsymbol s}$. \end{lem}
\begin{proof} Let $T = T_S({\boldsymbol s})$, $B_r = B({\boldsymbol s}, r)$ and $g = P_{T}$ for short. By \lemref{proj}, $g$ is bi-Lipschitz with constants $(1+\kappa r)^{-1}$ and 1 on $S \cap B_r$, so by \lemref{Lip-vol} we have \[ (1+\kappa r)^{-d} \le \frac{\operatorname{vol}(g(S \cap B_r))}{\operatorname{vol}(S \cap B_r)} \le 1. \] That $g^{-1}$ is Lipschitz with constant $1 + \kappa r$ on $g(S \cap B_r)$ also implies that $g(S \cap B_r)$ contains $T \cap B_{r'}$ where $r' := r/(1+\kappa r)$. From this, and the fact that $g(S \cap B_r) \subset T \cap B_r$, we get \begin{equation} \label{vol1} 1 \le \frac{\operatorname{vol}(T \cap B_r)}{\operatorname{vol}(g(S \cap B_r))} \le \frac{\operatorname{vol}(T \cap B_r)}{\operatorname{vol}(T \cap B_{r'})} = \frac{r^d}{{r'}^d} = (1+\kappa r)^{d}. \end{equation}
We therefore have \[ \operatorname{vol}(S \cap B_r) \ge \operatorname{vol}(g(S \cap B_r)) \ge (1+\kappa r)^{-d} \operatorname{vol}(T \cap B_r), \] and \[ \operatorname{vol}(S \cap B_r) \le (1+\kappa r)^{d} \operatorname{vol}(g(S \cap B_r)) \le (1+\kappa r)^{d} \operatorname{vol}(T \cap B_r). \] And we conclude with the inequality $(1+x)^d \le 1 + 2dx$ valid for any $x \in [0,1/d]$ and any $d \ge 1$. \end{proof}
We now look at the density of a sample from the uniform on a smooth, compact surface.
\begin{lem} \label{lem:N-size} There is a constant $\Clref{N-size}>0$ such that the following is true. If $S \in \mathcal{S}_d(\kappa)$ and we sample $n$ points ${\boldsymbol s}_1, \dots, {\boldsymbol s}_n$ independently and uniformly at random from $S$, and if $0 < r < 1/(\Clref{N-size} \kappa)$, then with probability at least $1 - \Clref{N-size} r^{-d} \exp(- n r^d/\Clref{N-size})$, any ball of radius $r$ with center on $S$ has between $n r^d/\Clref{N-size}$ and $\Clref{N-size} n r^d$ sample points.
\end{lem}
\begin{proof}
For a set $R$, let $N(R)$ denote the number of sample points in $R$. For any measurable $R$, $N(R) \sim \text{Bin}(n, p_R),$ where $p_R := \operatorname{vol}(R \cap S)/\operatorname{vol}(S)$. Let ${\boldsymbol x}_1, \dots, {\boldsymbol x}_m$ be a maximal $(r/2)$-packing of $S$, and let $B_i = B({\boldsymbol x}_i, r/4) \cap S$. For any ${\boldsymbol s} \in S$, there is $j$ such that $\|{\boldsymbol s} - {\boldsymbol x}_j\| \le r/2$, which implies $B_j \subset B({\boldsymbol s}, r)$ by the triangle inequality. Hence, $\min_{{\boldsymbol s} \in S} N(B({\boldsymbol s}, r)) \ge \min_i N(B_i)$.
By the fact that $B_i \cap B_j = \emptyset$ for $i \ne j$, \[ \operatorname{vol}(S) \ge \sum_{i=1}^m \operatorname{vol}(B_i) \ge m \min_i \operatorname{vol}(B_i), \] and assuming that $r$ is small enough that we may apply \lemref{ball-vol}, we have \[ \min_i \operatorname{vol}(B_i) \ge \frac{\omega_d}2 (r/4)^d, \] where $\omega_d$ is the volume of the $d$-dimensional unit ball. This leads to $m \le C r^{-d}$ and $p := \min_i \, p_{B_i} \ge r^d/C$, where $C > 0$ depends only on $S$.
Now, applying Bernstein's inequality to the binomial distribution, we get \begin{equation} \label{Ni} \pr{N(B_i) \le n p/2} \le \pr{N(B_i) \le n p_{B_i}/2} \le e^{-(3/32) n p_{B_i}} \le e^{-(3/32) n p}. \end{equation} We follow this with the union bound, to get \[ \pr{\min_{{\boldsymbol s} \in S} N(B({\boldsymbol s}, r)) \le n r^d/(2C)} \le m e^{-(3/32) np} \le\ C r^{-d} e^{-\frac{3}{32 C} n r^d}. \] From this the lower bound follows. The proof of the upper bound is similar. \end{proof}
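Although not needed for the argument, the scaling in \lemref{N-size} is easy to probe by simulation. The following Python snippet (an illustration of ours; the surface, the sample size $n$ and the radius $r$ are arbitrary choices) samples $n$ points uniformly on the unit circle ($d = 1$) and reports the smallest and largest number of sample points captured by balls of radius $r$ centered at the sample points, to be compared with the reference scale $n r^d$.
\begin{verbatim}
# Numerical probe of the ball-counting bounds, with d = 1 (unit circle in R^2).
import numpy as np

rng = np.random.default_rng(0)
n, r = 2000, 0.05
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
pts = np.column_stack((np.cos(theta), np.sin(theta)))  # n points uniform on the circle

# Number of sample points in B(s_i, r) for every center s_i (O(n^2), fine at this size).
dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
counts = (dists <= r).sum(axis=1)

print("reference scale n * r^d :", n * r)
print("min / max ball count    :", counts.min(), counts.max())
\end{verbatim}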
Next, we bound the volume of the symmetric difference between two balls. \begin{lem} \label{lem:2ball-vol} Take ${\boldsymbol x}, {\boldsymbol y} \in \mathbb{R}^d$ and $0 < \delta \le 1$. Then \[
\frac{\operatorname{vol}(B({\boldsymbol x}, \delta) \, \triangle \, B({\boldsymbol y}, 1))}{2 \operatorname{vol}(B(0, 1))} \le 1- (1 - \|{\boldsymbol x} - {\boldsymbol y}\|)_+^d \wedge \delta^d. \] \end{lem}
\begin{proof}
It suffices to prove the result when $\|{\boldsymbol x} - {\boldsymbol y}\| < 1$. In that case, with $\gamma := (1 - \|{\boldsymbol x} - {\boldsymbol y}\|) \wedge \delta$, we have $B({\boldsymbol x}, \gamma) \subset B({\boldsymbol x}, \delta) \cap B({\boldsymbol y}, 1)$, so that \begin{eqnarray*} \operatorname{vol}(B({\boldsymbol x}, \delta) \, \triangle \, B({\boldsymbol y}, 1)) &=& \operatorname{vol}(B({\boldsymbol x}, \delta)) + \operatorname{vol}(B({\boldsymbol y}, 1)) - 2 \operatorname{vol}(B({\boldsymbol x}, \delta) \cap B({\boldsymbol y}, 1)) \\ &\le& 2 \operatorname{vol}(B({\boldsymbol y}, 1)) - 2 \operatorname{vol}(B({\boldsymbol x}, \gamma)) \\ &=& 2 \operatorname{vol}(B({\boldsymbol y},1)) (1 - \gamma^d). \end{eqnarray*} \end{proof}
\subsubsection{Covariances}
The result below describes explicitly the covariance matrix of the uniform distribution over the unit ball of a subspace. \begin{lem} \label{lem:unif-cov} Let $T$ be a subspace of dimension $d$. Then the covariance matrix of the uniform distribution on $T \cap B(0,1)$ (seen as a linear map) is equal to $c P_T$, where $c:=\frac1{d+2}$.
\end{lem}
\begin{proof}
Assume WLOG that $T = \mathbb{R}^d \times \{0\}$. Let $X$ be distributed according to the uniform distribution on $T \cap B(0,1)$ and let $R = \|X\|$. Note that \[\pr{R \le r} = \frac{\operatorname{vol}(T \cap B(0,r))}{\operatorname{vol}(T \cap B(0,1))} = r^d, \quad \forall r \in [0,1].\] By symmetry, $\operatorname{\mathbb{E}}(X_i X_j) = 0$ if $i \ne j$, while \[ \operatorname{\mathbb{E}}(X_1^2) = \frac1d \operatorname{\mathbb{E}}(X_1^2 + \dots + X_d^2) = \frac1d \operatorname{\mathbb{E}}(R^2) = \frac1d \int_0^1 r^2 \cdot d r^{d-1} {\rm d}r = \frac1{d+2}. \] This is exactly the representation of $\frac1{d+2} P_T$ in the canonical basis of $\mathbb{R}^D$. \end{proof}
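The constant $c = 1/(d+2)$ is easily confirmed by simulation. In the Python snippet below (an illustration of ours; the ambient dimension $D$, the intrinsic dimension $d$ and the sample size $m$ are arbitrary), points are drawn uniformly from the unit ball of a random $d$-dimensional subspace $T \subset \mathbb{R}^D$ and their empirical covariance is compared with $P_T/(d+2)$.
\begin{verbatim}
# Monte Carlo check that the covariance of the uniform distribution on T \cap B(0,1)
# equals P_T / (d+2).  The dimensions and the sample size are arbitrary.
import numpy as np

rng = np.random.default_rng(1)
d, D, m = 2, 5, 200000

Q, _ = np.linalg.qr(rng.standard_normal((D, d)))    # orthonormal basis of a random T
P_T = Q @ Q.T                                       # orthogonal projection onto T

g = rng.standard_normal((m, d))
u = g / np.linalg.norm(g, axis=1, keepdims=True)    # uniform on the unit sphere of R^d
radii = rng.uniform(size=m) ** (1.0 / d)            # radius with density d * r^(d-1)
X = (radii[:, None] * u) @ Q.T                      # uniform sample from T \cap B(0,1)

emp_cov = np.cov(X, rowvar=False, bias=True)
print(np.abs(emp_cov - P_T / (d + 2)).max())        # close to 0 for large m
\end{verbatim}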
We now show that a bound on the total variation distance between two compactly supported distributions implies a bound on the difference between their covariance matrices. For a measure $P$ on $\mathbb{R}^D$ and an integrable function $f$, let $P(f)$ denote the integral of $f$ with respect to $P$, that is, \[ P(f) = \int f(x) P(dx), \] and let $\operatorname{\mathbb{E}}(P) = P({\boldsymbol x})$ and $\operatorname{Cov}(P) = P({\boldsymbol x} {\boldsymbol x}^\top) - P({\boldsymbol x}) P({\boldsymbol x})^\top$ denote the mean and covariance matrix of $P$, respectively.
\begin{lem} \label{lem:TV-cov} Suppose $\lambda$ and $\nu$ are two Borel probability measures on $\mathbb{R}^d$ supported on $B(0,1)$. Then \[
\|\operatorname{\mathbb{E}}(\lambda) - \operatorname{\mathbb{E}}(\nu)\| \le \sqrt{d} \, {\rm TV}(\lambda, \nu), \qquad \|\operatorname{Cov}(\lambda) - \operatorname{Cov}(\nu)\| \le 3 d \, {\rm TV}(\lambda, \nu). \] \end{lem}
\begin{proof}
Let $f_k({\boldsymbol t}) = t_k$ when ${\boldsymbol t} = (t_1, \dots, t_d)$, and note that $|f_k({\boldsymbol t})| \le 1$ for all $k$ and all ${\boldsymbol t} \in B(0,1)$. By the fact that \[
{\rm TV}(\lambda, \nu) = \sup\{\lambda(f) - \nu(f) : f: \mathbb{R}^d \to \mathbb{R} \text{ measurable with } |f| \le 1\}, \] we have
\[|\lambda(f_k) - \nu(f_k)| \le {\rm TV}(\lambda, \nu), \quad \forall k = 1, \dots, d.\] Therefore, \[
\|\operatorname{\mathbb{E}}(\lambda) - \operatorname{\mathbb{E}}(\nu)\|^2 = \sum_{k=1}^d (\lambda(f_k) - \nu(f_k))^2 \le d \, {\rm TV}(\lambda, \nu)^2, \] which proves the first part.
Similarly, let $f_{k\ell}({\boldsymbol t}) = t_k t_\ell$. Since $|f_{k\ell}({\boldsymbol t})| \le 1$ for all $k, \ell$ and all ${\boldsymbol t} \in B(0,1)$, we have
\[|\lambda(f_{k\ell}) - \nu(f_{k\ell})| \le {\rm TV}(\lambda, \nu), \quad \forall k, \ell = 1, \dots, d.\] Since for any probability measure $\mu$ on $\mathbb{R}^d$, \[\operatorname{Cov}(\mu) = \big(\mu(f_{k\ell}) - \mu(f_k)\mu(f_\ell) : k, \ell = 1, \dots, d\big),\] we have \begin{eqnarray*}
\|\operatorname{Cov}(\lambda) - \operatorname{Cov}(\nu)\|
&\le& d \, \max_{k, \ell} \big( |\lambda(f_{k\ell}) - \nu(f_{k\ell})| + |\lambda(f_{k}) \lambda(f_{\ell}) - \nu(f_k)\nu(f_\ell)| \big) \\
&\le& d \max_{k, \ell} \big(|\lambda(f_{k\ell}) - \nu(f_{k\ell})| + |\lambda(f_{k})| |\lambda(f_{\ell}) - \nu(f_\ell)| + |\nu(f_{\ell})| |\lambda(f_{k}) - \nu(f_k)|\big) \\ &\le& 3 d \, {\rm TV}(\lambda, \nu),
\end{eqnarray*} using the fact that $|\lambda(f_k)| \le 1$ and $|\nu(f_k)| \le 1$ for all $k$. \end{proof}
Next we compare the covariance matrix of the uniform distribution on a small piece of smooth surface with that of the uniform distribution on the projection of that piece onto a nearby tangent subspace.
\begin{lem} \label{lem:U-approx} There is a constant $\Clref{U-approx}>0$ depending only on $d$ such that the following is true. Take $S \in \mathcal{S}_d(\kappa)$, $r < \frac1{\Clref{U-approx} \kappa}$ and ${\boldsymbol x} \in \mathbb{R}^D$ such that $\dist({\boldsymbol x}, S) \le r$. Let ${\boldsymbol s} = P_S({\boldsymbol x})$ and $T = T_S({\boldsymbol s})$. If ${\boldsymbol\zeta}$ and ${\boldsymbol\xi}$ are the means, and ${\boldsymbol M}$ and ${\boldsymbol N}$ are the covariance matrices of the uniform distributions on $S \cap B({\boldsymbol x}, r)$ and $T \cap B({\boldsymbol x}, r)$ respectively, then
\[\|{\boldsymbol\zeta} - {\boldsymbol\xi}\| \le \Clref{U-approx} \kappa r^{2},\qquad \|{\boldsymbol M} - {\boldsymbol N}\| \le \Clref{U-approx} \kappa r^{3}.\] \end{lem}
\begin{proof} We focus on proving the bound on the covariances, and leave the bound on the means --- whose proof is both similar and simpler --- as an exercise to the reader. Let $T = T_S({\boldsymbol s})$, $B_r = B({\boldsymbol x}, r)$ and $g = P_{T}$ for short. Let $A = S \cap B_r$ and $A' = T \cap B_r$. Let $X \sim \lambda_A$ and define $Y = g(X)$ with distribution denoted $\lambda_A^g$. We have \[\operatorname{Cov}(X) - \operatorname{Cov}(Y) = \frac12 \big( \operatorname{Cov}(X-Y, X+Y) + \operatorname{Cov}(X+Y, X-Y)\big),\]
where $\operatorname{Cov}(U, V) = \operatorname{\mathbb{E}}((U-{\boldsymbol\mu}_U) (V -{\boldsymbol\mu}_V)^T)$ is the cross-covariance of random vectors $U$ and $V$ with respective means ${\boldsymbol\mu}_U$ and ${\boldsymbol\mu}_V$. Note that by Jensen's inequality, the fact $\|{\boldsymbol u} {\boldsymbol v}^T\| = \|{\boldsymbol u}\| \|{\boldsymbol v}\|$ for any pair of vectors ${\boldsymbol u}, {\boldsymbol v}$, and then the Cauchy-Schwarz inequality \[
\|\operatorname{Cov}(U, V)\| \le \operatorname{\mathbb{E}}(\|U-{\boldsymbol\mu}_U\| \cdot \|V -{\boldsymbol\mu}_V\|) \le \operatorname{\mathbb{E}}(\|U-{\boldsymbol\mu}_U\|^2)^{1/2} \cdot \operatorname{\mathbb{E}}(\|V-{\boldsymbol\mu}_V\|^2)^{1/2}. \] Hence, letting ${\boldsymbol\mu}_X = \operatorname{\mathbb{E}} X$ and ${\boldsymbol\mu}_Y = \operatorname{\mathbb{E}} Y$, we have \begin{eqnarray}
\|\operatorname{Cov}(\lambda_A) - \operatorname{Cov}(\lambda_A^g)\|
&\le& \|\operatorname{Cov}(X-Y, X+Y)\| \notag \\
&\le& \operatorname{\mathbb{E}}\big[\|X - Y - {\boldsymbol\mu}_X + {\boldsymbol\mu}_Y\|^2\big]^{1/2} \operatorname{\mathbb{E}}\big[\|X+Y - {\boldsymbol\mu}_X -{\boldsymbol\mu}_Y\|^2\big]^{1/2} \notag \\
&\le& \operatorname{\mathbb{E}}\big[\|X - Y\|^2\big]^{1/2} \left(\operatorname{\mathbb{E}}\big[\|X - {\boldsymbol s}\|^2\big]^{1/2} + \operatorname{\mathbb{E}}\big[\|Y - {\boldsymbol s}\|^2\big]^{1/2} \right) \label{CovXY} \\ &\le& \frac{\kappa}2 r^2 (r + r) = \kappa r^3, \notag \end{eqnarray} where the third inequality is due to the triangle inequality and the fact that the mean minimizes the mean-squared error, and the fourth to the fact that $X, Y \in B_r$ and \eqref{S-approx1}.
Assume $r < 1/((\Clref{TV} \vee d) \kappa)$. Let $\lambda_{g(A)}$ denote the uniform distribution on $g(A)$.
$\lambda_A^g$ and $\lambda_{g(A)}$ are both supported on $B_r$, so that applying \lemref{TV-cov} with proper scaling, we get
\[\|\operatorname{Cov}(\lambda_A^g) - \operatorname{Cov}(\lambda_{g(A)})\| \le 3d r^2\, {\rm TV}(\lambda_A^g, \lambda_{g(A)}).\] We know that $g$ is 1-Lipschitz, and by \lemref{proj} --- which is applicable since $\Clref{TV} \ge 3$ --- $g^{-1}$ is well-defined and is $(1 + \kappa r)$-Lipschitz on $B_r$. Hence, by \lemref{TV-map} and the fact that $\dim(A) = d$, we have \[ {\rm TV}(\lambda_A^g, \lambda_{g(A)}) \le 8 ((1 + \kappa r)^d - 1) \le 16 d \kappa r, \] using the inequality $(1+x)^d \le 1 + 2dx$, valid for any $x \in [0,1/d]$ and any $d \ge 1$.
Noting that $\lambda_{A'}$ is also supported on $B_r$, applying \lemref{TV-cov} with proper scaling, we get
\[\|\operatorname{Cov}(\lambda_{g(A)}) - \operatorname{Cov}(\lambda_{A'})\| \le 3d r^2\, {\rm TV}(\lambda_{g(A)}, \lambda_{A'}),\] with \[ {\rm TV}(\lambda_{g(A)}, \lambda_{A'}) \le 4 \frac{\operatorname{vol}(g(A) \, \triangle \, A')}{\operatorname{vol}(A')} \le C \kappa r, \] by \lemref{2unif} and \lemref{TV}, where $C$ depends only on $d$ and $\kappa$.
By the triangle inequality, \begin{eqnarray*}
\|{\boldsymbol M} - {\boldsymbol N}\| &=& \|\operatorname{Cov}(\lambda_{A}) - \operatorname{Cov}(\lambda_{A'})\| \\
&\le& \|\operatorname{Cov}(\lambda_{A}) - \operatorname{Cov}(\lambda_{A}^g)\| + \|\operatorname{Cov}(\lambda_{A}^g) - \operatorname{Cov}(\lambda_{g(A)})\| + \|\operatorname{Cov}(\lambda_{g(A)}) - \operatorname{Cov}(\lambda_{A'})\| \\ &\le& \kappa r^3 + 48 d^2 \kappa r^3 + C r^{3}.
\end{eqnarray*} From this, we conclude. \end{proof}
Next is a lemma on the estimation of a covariance matrix. The result is a simple consequence of the matrix Hoeffding inequality of \cite{tropp}. Note that simply bounding the operator norm by the Frobenius norm, and then applying the classical Hoeffding inequality \citep{hoeffding} would yield a bound sufficient for our purposes, but this is a good opportunity to use a more recent and sophisticated result.
\begin{lem} \label{lem:cov} Let ${\boldsymbol C}_m$ denote the empirical covariance matrix based on an i.i.d.~sample of size $m$ from a distribution on the unit ball of $\mathbb{R}^d$ with covariance $\bSigma$. Then
$$\pr{\|{\boldsymbol C}_m - \bSigma\| > t} \leq 4 d \exp\left(-\frac{mt}{16} \min\big(\frac{t}{32}, \frac{m}d\big) \right).$$ \end{lem}
\begin{proof}
Without loss of generality, we assume that the distribution has zero mean and is now supported on $B(0,2)$. Let ${\boldsymbol x}_1, \dots, {\boldsymbol x}_m$ denote the sample, with ${\boldsymbol x}_i = (x_{i,1}, \dots, x_{i,d})$. We have \[ {\boldsymbol C}_m = {\boldsymbol C}^\star_m - \frac1m \bar{{\boldsymbol x}} \bar{{\boldsymbol x}}^T, \] where \[ {\boldsymbol C}^\star_m := \frac1m \sum_{i=1}^m {\boldsymbol x}_i {\boldsymbol x}_i^T, \quad \bar{{\boldsymbol x}} := \frac1m \sum_{i=1}^m {\boldsymbol x}_i. \] Note that \[
\|{\boldsymbol C}_m - \bSigma\| \le \|{\boldsymbol C}^\star_m - \bSigma\| + \frac1m \|\bar{{\boldsymbol x}}\|^2. \] Applying the union bound and then Hoeffding's inequality to each coordinate --- which is in $[-2, 2]$ --- we get \[
\operatorname{\mathbb{P}}(\|\bar{{\boldsymbol x}}\| > t) \le \sum_{j=1}^d \operatorname{\mathbb{P}}(|\bar{x}_j| > t/\sqrt{d}) \le 2 d \exp\left(-\frac{m t^2}{8 d}\right). \] Noting that $\frac1m ({\boldsymbol x}_i {\boldsymbol x}_i^T - \bSigma), i=1,\dots,m,$ are independent, zero-mean, self-adjoint matrices with spectral norm bounded by $4/m$, we may apply the matrix Hoeffding inequality \citep[Th.~1.3]{tropp} to get \[
\pr{\|{\boldsymbol C}^\star_m - \bSigma\| > t} \le 2 d \exp\left(-\frac{t^2}{8 \sigma^2}\right), \quad \sigma^2 := m (4/m)^2 = 16/m. \]
Applying the union bound and using the previous inequalities, we arrive at \begin{eqnarray*}
\pr{\|{\boldsymbol C}_m - \bSigma\| > t}
&\le& \pr{\|{\boldsymbol C}^\star_m - \bSigma\| > t/2} + \pr{\|\bar{{\boldsymbol x}}\| > \sqrt{m t/2}} \\ &\le& 2 d \exp\left(-\frac{m t^2}{512}\right) + 2 d \exp\left(-\frac{m^2 t}{16 d}\right) \\ &\le& 4 d \exp\left(-\frac{mt}{16} \min\big(\frac{t}{32}, \frac{m}d\big) \right). \end{eqnarray*} \end{proof}
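The rate implied by \lemref{cov} can be visualized with a small simulation: for the uniform distribution on the unit disk ($d = 2$, with covariance $\bSigma = {\boldsymbol I}/(d+2)$ by \lemref{unif-cov}), the operator-norm error of the empirical covariance decays roughly like $1/\sqrt{m}$. The Python snippet below is an illustration of ours; the sample sizes and the number of repetitions are arbitrary.
\begin{verbatim}
# The empirical covariance of m uniform draws from the unit disk (d = 2) concentrates
# around I/(d+2); the operator-norm error decays roughly like 1/sqrt(m).
import numpy as np

rng = np.random.default_rng(2)
d = 2
Sigma = np.eye(d) / (d + 2)                         # true covariance

def sample_disk(m):
    g = rng.standard_normal((m, d))
    u = g / np.linalg.norm(g, axis=1, keepdims=True)
    return (rng.uniform(size=m) ** (1.0 / d))[:, None] * u

for m in (100, 1000, 10000):
    errs = [np.linalg.norm(np.cov(sample_disk(m), rowvar=False, bias=True) - Sigma, 2)
            for _ in range(200)]
    print(m, np.mean(errs), np.mean(errs) * np.sqrt(m))  # last column roughly constant
\end{verbatim}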
\subsubsection{Projections} We relate below the difference of two orthogonal projections with the largest principal angle between the corresponding subspaces.
\begin{lem} \label{lem:P-diff}
For two affine non-null subspaces $T,T'$,
\[\|P_{T} - P_{T'}\| = \begin{cases} \sin \theta_{\rm max}(T, T'), & \text{if } \dim(T) = \dim(T'), \\ 1, & \text{otherwise}. \end{cases} \] \end{lem}
\begin{proof} For two affine subspaces $T, T' \subset \mathbb{R}^D$ of the same dimension $q$, let $ \frac\pi2 \ge \theta_1 \ge \cdots \ge \theta_q \ge 0 $
denote the principal angles between them. By \citep[Th.~I.5.5]{MR1061154}, the singular values of $P_{T} - P_{T'}$ are $\{\sin \theta_j : j = 1, \dots, q\}$, so that $\|P_{T} - P_{T'}\| = \max_j \sin \theta_j = \sin \theta_1 = \sin \theta_{\rm max}(T, T')$. Suppose now that $T$ and $T'$ are of different dimension, say $\dim(T) > \dim(T')$. We have $\|P_{T} - P_{T'}\| \le \|P_{T}\| \vee \|P_{T'}\| = 1$, since $P_T$ and $P_{T'}$ are orthogonal projections and therefore positive semidefinite with operator norm equal to 1. Let $L = P_T(T')$. Since $\dim(L) \le \dim(T') < \dim(T)$, there is ${\boldsymbol u} \in T \cap L^\perp$ with ${\boldsymbol u} \ne 0$. Then ${\boldsymbol v}^\top {\boldsymbol u} = P_T({\boldsymbol v})^\top {\boldsymbol u} = 0$ for all ${\boldsymbol v} \in T'$, implying that $P_{T'}({\boldsymbol u}) = 0$ and consequently $(P_{T} - P_{T'}) {\boldsymbol u} = {\boldsymbol u}$, so that $\|P_{T} - P_{T'}\| \ge 1$. \end{proof}
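Numerically, the identity in \lemref{P-diff} for subspaces of the same dimension can be checked as follows: with $Q_1$ and $Q_2$ orthonormal bases of $T$ and $T'$, the singular values of $Q_1^\top Q_2$ are the cosines of the principal angles, and the largest principal angle recovers $\|P_T - P_{T'}\|$. The Python snippet below is an illustration of ours; the dimensions are arbitrary.
\begin{verbatim}
# ||P_T - P_T'|| versus the largest principal angle, for two random d-dimensional
# subspaces of R^D.  The two printed numbers agree (up to numerical error).
import numpy as np

rng = np.random.default_rng(3)
D, d = 6, 3
Q1, _ = np.linalg.qr(rng.standard_normal((D, d)))   # orthonormal basis of T
Q2, _ = np.linalg.qr(rng.standard_normal((D, d)))   # orthonormal basis of T'

P1, P2 = Q1 @ Q1.T, Q2 @ Q2.T
cosines = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), -1.0, 1.0)
theta_max = np.arccos(cosines.min())                # largest principal angle

print(np.linalg.norm(P1 - P2, 2), np.sin(theta_max))
\end{verbatim}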
The lemma below is a perturbation result for eigenspaces and widely known as the $\sin \Theta$ Theorem of \cite{MR0264450}. See also \citep[Th.~7]{1288832} or \citep[Th.~V.3.6]{MR1061154}. \begin{lem}[Davis and Kahan] \label{lem:davis} Let ${\boldsymbol M}$ be positive semi-definite with eigenvalues $\beta_1 \geq \beta_2 \geq \cdots$. Suppose that $\Delta_d := \beta_d -\beta_{d+1} > 0$. Then for any other positive semi-definite matrix ${\boldsymbol N}$, \[
\|P^{(d)}_{\boldsymbol N} -P^{(d)}_{\boldsymbol M}\| \leq \frac{\sqrt{2} \|{\boldsymbol N} -{\boldsymbol M}\|}{\Delta_d}, \] where $P^{(d)}_{\boldsymbol M}$ and $P^{(d)}_{\boldsymbol N}$ denote the orthogonal projections onto the span of the top $d$ eigenvectors of ${\boldsymbol M}$ and ${\boldsymbol N}$, respectively. \end{lem}
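As a quick numerical illustration of the bound (ours; the matrix, the eigengap and the perturbation scale are arbitrary), the following Python snippet compares the two sides of the inequality for a diagonal matrix with $d = 2$ and $\Delta_d = 1.5$, perturbed by a small symmetric matrix.
\begin{verbatim}
# Comparing the two sides of the Davis-Kahan bound for a small symmetric perturbation.
import numpy as np

rng = np.random.default_rng(4)
d = 2
M = np.diag([3.0, 2.5, 1.0, 0.6, 0.4])              # eigengap Delta_d = 2.5 - 1.0 = 1.5
E = 0.02 * rng.standard_normal((5, 5))
N = M + (E + E.T) / 2.0                             # small symmetric perturbation of M

def top_proj(A, k):
    # Orthogonal projection onto the span of the top-k eigenvectors of symmetric A.
    vals, vecs = np.linalg.eigh(A)
    V = vecs[:, np.argsort(vals)[::-1][:k]]
    return V @ V.T

lhs = np.linalg.norm(top_proj(N, d) - top_proj(M, d), 2)
rhs = np.sqrt(2.0) * np.linalg.norm(N - M, 2) / 1.5
print(lhs, "<=", rhs)
\end{verbatim}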
\subsubsection{Intersections}
We start with an elementary result on points near the intersection of two affine subspaces.
\begin{lem} \label{lem:angle} Take any two linear subspaces $T_1, T_2 \subset \mathbb{R}^D$. For any point ${\boldsymbol t}_1 \in T_1 \setminus T_2$, we have \[ \dist({\boldsymbol t}_1, T_2) \ge \dist({\boldsymbol t}_1, T_1 \cap T_2) \, \sin \theta_{\rm min}(T_1, T_2). \] \end{lem}
\begin{proof} We may reduce the problem to the case where $T_1 \cap T_2 = \{0\}$. Indeed, let $\tilde{T}_1 = T_1 \cap (T_1 \cap T_2)^\perp$, $\tilde{T}_2 = T_2 \cap (T_1 \cap T_2)^\perp$ and $\tilde{{\boldsymbol t}}_1 = {\boldsymbol t}_1 - P_{T_1 \cap T_2}({\boldsymbol t}_1)$. Then \[
\|{\boldsymbol t}_1 - P_{T_2}({\boldsymbol t}_1)\| = \|\tilde{{\boldsymbol t}}_1 - P_{\tilde{T}_2}(\tilde{{\boldsymbol t}}_1)\|, \quad \|{\boldsymbol t}_1 - P_{T_1 \cap T_2}({\boldsymbol t}_1)\| = \|\tilde{{\boldsymbol t}}_1\|, \quad \sin \theta_{\rm min}(T_1, T_2) = \sin \theta_{\rm min}(\tilde{T}_1, \tilde{T}_2). \] So assume that $T_1 \cap T_2 = \{0\}$. By \citep[Th.~10.1]{MR0094880}, the angle formed by ${\boldsymbol t}_1$ and $P_{T_2}({\boldsymbol t}_1)$ is at least as large as the smallest principal angle between $T_1$ and $T_2$, which is $\theta_{\rm min}(T_1, T_2)$ since $T_1 \cap T_2 = \{0\}$. From this the result follows immediately. \end{proof}
The following result says that a point cannot be close to two compact and smooth surfaces intersecting at a positive angle without being close to their intersection. Note that the constant there cannot be solely characterized by $\kappa$, as it also depends on the separation between the surfaces away from their intersection. \begin{lem} \label{lem:sep} Suppose $S_1, S_2 \in \mathcal{S}_d(\kappa)$ intersect at a strictly positive angle and that ${\rm reach}(S_1 \cap S_2) \ge 1/\kappa$. Then there is a constant $\Clref{sep}$ such that \begin{equation} \label{sep} \dist({\boldsymbol x}, S_1 \cap S_2) \le \Clref{sep} \max\big\{\dist({\boldsymbol x}, S_1), \dist({\boldsymbol x}, S_2) \big\}, \quad \forall {\boldsymbol x} \in \mathbb{R}^D. \end{equation} \end{lem}
\begin{proof}
Assume the result is not true, so there is a sequence $({\boldsymbol x}_n) \subset \mathbb{R}^D$ such that $\dist({\boldsymbol x}_n, S_1 \cap S_2) > n \max_k \dist({\boldsymbol x}_n, S_k)$. Because the surfaces are bounded, we may assume WLOG that the sequence is bounded. Then $\dist({\boldsymbol x}_n, S_1 \cap S_2)$ is bounded, which implies $\max_k \dist({\boldsymbol x}_n, S_k) = O(1/n)$. This also forces $\dist({\boldsymbol x}_n, S_1 \cap S_2) \to 0$. Indeed, otherwise there is a constant $C > 0$ and a subsequence $({\boldsymbol x}_{n'})$ such that $\dist({\boldsymbol x}_{n'}, S_1 \cap S_2) \ge C$. Since $({\boldsymbol x}_{n'})$ is bounded, there is a subsequence $({\boldsymbol x}_{n''})$ that converges, and by the fact that $\max_k \dist({\boldsymbol x}_{n''}, S_k) = o(1)$, and by compactness of $S_k$, the limit is necessarily in $S_1 \cap S_2$, which is a contradiction. So we have $\dist({\boldsymbol x}_n, S_1 \cap S_2) = o(1)$, implying $\max_k \dist({\boldsymbol x}_n, S_k) = o(1/n)$.
Assume $n$ is large enough that $\dist({\boldsymbol x}_n, S_1 \cap S_2) < 1/\kappa$ and let ${\boldsymbol s}_n^k$ be the projection of ${\boldsymbol x}_n$ onto $S_k$, and ${\boldsymbol s}_n^\ddag$ the projection of ${\boldsymbol x}_n$ onto $S_1 \cap S_2$. Let $T_k = T_{S_k}({\boldsymbol s}_n^\ddag)$ and note that $\theta_{\rm min}(T_1, T_2) \ge \theta$, where $\theta > 0$ is the minimum intersection angle between $S_1$ and $S_2$ defined in \eqref{theta}. Let ${\boldsymbol t}_n^k$ be the projection of ${\boldsymbol s}_n^k$ onto $T_k$. Assume WLOG that $\|{\boldsymbol t}_n^1 - {\boldsymbol s}_n^1\| \ge \|{\boldsymbol t}_n^2 - {\boldsymbol s}_n^2\|$. Let ${\boldsymbol t}_n$ denote the projection of ${\boldsymbol t}_n^1$ onto $T_1 \cap T_2$, and then let ${\boldsymbol s}_n = P_{S_1 \cap S_2}({\boldsymbol t}_n)$.
By assumption, we have \begin{equation} \label{sep1}
n \max_k \|{\boldsymbol x}_n - {\boldsymbol s}_n^k\| \le \|{\boldsymbol x}_n - {\boldsymbol s}_n^\ddag\| = o(1). \end{equation} We start with the RHS: \begin{equation} \label{sep1-1}
\|{\boldsymbol x}_n - {\boldsymbol s}_n^\ddag\|
= \min_{{\boldsymbol s} \in S_1 \cap S_2} \|{\boldsymbol x}_n - {\boldsymbol s}\|
\le \|{\boldsymbol x}_n - {\boldsymbol s}_n\|,
\end{equation} and first show that $\|{\boldsymbol x}_n - {\boldsymbol s}_n\| = o(1)$ too. We use the triangle inequality multiple times in what follows. We have \begin{equation} \label{sep2}
\|{\boldsymbol x}_n - {\boldsymbol s}_n\| \le \|{\boldsymbol x}_n - {\boldsymbol s}_n^1\| + \|{\boldsymbol s}_n^1 - {\boldsymbol t}_n^1\| + \|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\| + \|{\boldsymbol t}_n - {\boldsymbol s}_n\|. \end{equation}
From \eqref{sep1}, $ \|{\boldsymbol x}_n - {\boldsymbol s}_n^1\| = o(1)$ and $ \|{\boldsymbol x}_n - {\boldsymbol s}_n^\ddag\| = o(1)$, so that by \eqref{S-approx1}, \begin{equation} \label{sep3}
\|{\boldsymbol s}_n^1 -{\boldsymbol t}_n^1\| \le \kappa \|{\boldsymbol s}_n^1 - {\boldsymbol s}_n^\ddag\|^2 \le 2 \kappa ( \|{\boldsymbol s}_n^1 - {\boldsymbol x}_n\|^2 + \|{\boldsymbol x}_n - {\boldsymbol s}_n^\ddag\|^2) = o(1). \end{equation} We also have \begin{equation} \label{sep3-1}
\|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\| = \min_{{\boldsymbol t} \in T_1 \cap T_2} \|{\boldsymbol t}_n^1 - {\boldsymbol t}\| \le \|{\boldsymbol t}_n^1 - {\boldsymbol s}_n^\ddag\| \le \|{\boldsymbol t}_n^1 -{\boldsymbol s}_n^1\| + \|{\boldsymbol s}_n^1 -{\boldsymbol x}_n\| + \|{\boldsymbol x}_n -{\boldsymbol s}_n^\ddag\| = o(1), \end{equation} where the first inequality comes from ${\boldsymbol s}_n^\ddag \in T_1 \cap T_2$. Finally,
\[\|{\boldsymbol t}_n - {\boldsymbol s}_n\| = \min_{{\boldsymbol s} \in S_1 \cap S_2} \|{\boldsymbol t}_n - {\boldsymbol s}\| \le \|{\boldsymbol t}_n - {\boldsymbol s}_n^\ddag\| \le \|{\boldsymbol t}_n - {\boldsymbol t}_n^1\| + \|{\boldsymbol t}_n^1 -{\boldsymbol s}_n^\ddag\| = o(1),\] where the first inequality comes from ${\boldsymbol s}_n^\ddag \in S_1 \cap S_2$.
We now proceed. The last upper bound is rather crude. Indeed, we use \eqref{S-approx3} for $S = S_1 \cap S_2$ and ${\boldsymbol s} = {\boldsymbol s}_n^\ddag$, noting that $T_{S_1 \cap S_2}({\boldsymbol s}_n^\ddag) = T_1 \cap T_2$ and $\|{\boldsymbol t}_n - {\boldsymbol s}_n^\ddag\| = o(1)$, and get
\[\|{\boldsymbol t}_n - {\boldsymbol s}_n\| \le \kappa \|{\boldsymbol t}_n - {\boldsymbol s}_n^\ddag\|^2 \le \kappa (\|{\boldsymbol t}_n - {\boldsymbol s}_n\| + \|{\boldsymbol s}_n - {\boldsymbol x}_n\| + \|{\boldsymbol x}_n - {\boldsymbol s}_n^\ddag\|)^2. \]
We have $\|{\boldsymbol x}_n - {\boldsymbol s}_n^\ddag\| = \|{\boldsymbol x}_n - P_{S_1 \cap S_2}({\boldsymbol x}_n)\| \le \|{\boldsymbol x}_n - {\boldsymbol s}_n\|$ because ${\boldsymbol s}_n \in S_1 \cap S_2$. This leads to \begin{equation} \label{sep4}
\|{\boldsymbol t}_n - {\boldsymbol s}_n\| \le \kappa (\|{\boldsymbol t}_n - {\boldsymbol s}_n\| + 2\|{\boldsymbol s}_n - {\boldsymbol x}_n\|)^2 \le 4 \kappa \|{\boldsymbol x}_n - {\boldsymbol s}_n\|^2,
\end{equation} eventually, since $\|{\boldsymbol t}_n - {\boldsymbol s}_n\| = o(1)$.
Combining \eqref{sep2}, \eqref{sep3} and \eqref{sep4}, we get
\[\|{\boldsymbol x}_n - {\boldsymbol s}_n\| \le \|{\boldsymbol x}_n - {\boldsymbol s}_n^1\| + O(\|{\boldsymbol x}_n - {\boldsymbol s}_n^1\|^2 + \|{\boldsymbol x}_n - {\boldsymbol s}_n\|^2) + \|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\| + O(\|{\boldsymbol x}_n - {\boldsymbol s}_n\|^2), \] which leads to \begin{equation} \label{sep5}
\|{\boldsymbol x}_n - {\boldsymbol s}_n\| \le 2 \|{\boldsymbol x}_n - {\boldsymbol s}_n^1\| + 2 \|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\|, \end{equation} when $n$ is large enough.
Using this bound in \eqref{sep1} combined with \eqref{sep1-1}, we get
\[\|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\| \ge \frac{n-2}2 \max_k \|{\boldsymbol x}_n - {\boldsymbol s}_n^k\|.\] We then have \begin{eqnarray*}
\max_k \|{\boldsymbol x}_n - {\boldsymbol s}_n^k\| &\ge& \frac12 \|{\boldsymbol s}_n^1 - {\boldsymbol s}_n^2\| \\
&\ge& \frac12 (\|{\boldsymbol t}_n^1 - {\boldsymbol t}_n^2\| - \|{\boldsymbol s}_n^1 - {\boldsymbol t}_n^1\| - \|{\boldsymbol s}_n^2 - {\boldsymbol t}_n^2\|) \\
&\ge& \frac12 \dist({\boldsymbol t}_n^1, T_2) - \|{\boldsymbol s}_n^1 - {\boldsymbol t}_n^1\|, \end{eqnarray*} with
\[\|{\boldsymbol s}_n^1 - {\boldsymbol t}_n^1\| = O(\|{\boldsymbol x}_n - {\boldsymbol s}_n^1\|^2 + \|{\boldsymbol x}_n - {\boldsymbol s}_n^\ddag\|^2) = O(\|{\boldsymbol x}_n - {\boldsymbol s}_n\|^2) = O(\|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\|^2),\]
due (in the same order) to \eqref{sep3}, \eqref{sep1}-\eqref{sep1-1}, and \eqref{sep5}. Recalling that $\|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\| = \dist({\boldsymbol t}_n^1, T_1 \cap T_2)$, we conclude that \[ \dist({\boldsymbol t}_n^1, T_2) = O(1/n) \dist({\boldsymbol t}_n^1, T_1 \cap T_2) + O(1) \dist({\boldsymbol t}_n^1, T_1 \cap T_2)^2. \]
However, by \lemref{angle}, $\dist({\boldsymbol t}_n^1, T_2) \ge (\sin \theta) \dist({\boldsymbol t}_n^1, T_1 \cap T_2)$, so that dividing by $\dist({\boldsymbol t}_n^1, T_2)$ above leads to $1 = O(1/n) + O(1) \dist({\boldsymbol t}_n^1, T_2)$, which is in contradiction with the fact that $\dist({\boldsymbol t}_n^1, T_2) \le \|{\boldsymbol t}_n^1 - {\boldsymbol t}_n\| = o(1)$, established in \eqref{sep3-1}. \end{proof}
\subsubsection{Covariances near an intersection}
We look at covariance matrices near an intersection. We start with a continuity result.
\begin{lem} \label{lem:cov-cont} Let $T_1$ and $T_2$ be two linear subspaces of same dimension $d$. For ${\boldsymbol x} \in T_1$, denote by $\bSigma({\boldsymbol x})$ the covariance matrix of the uniform distribution over $B({\boldsymbol x}, 1) \cap (T_1 \cup T_2)$. Then, for all ${\boldsymbol x}, {\boldsymbol y} \in T_1$, \[
\|\bSigma({\boldsymbol x}) - \bSigma({\boldsymbol y})\| \le \begin{cases}
5 d \, \|{\boldsymbol x} - {\boldsymbol y}\|, & \text{if } d \ge 2, \\
\sqrt{6 \|{\boldsymbol x} - {\boldsymbol y}\|}, & \text{if } d = 1. \end{cases} \] \end{lem}
\begin{proof} Since, by \lemref{unif-cov}, $\bSigma({\boldsymbol x}) = c P_{T_1}$ for all ${\boldsymbol x} \in T_1$ such that $\dist({\boldsymbol x}, T_2) \ge 1$, we may assume that $\dist({\boldsymbol x}, T_2) < 1$ and $\dist({\boldsymbol y}, T_2) < 1$. Let $d = \dim(T_1) = \dim(T_2)$ and $A^j_{\boldsymbol x} = B({\boldsymbol x}, 1) \cap T_j$ for any ${\boldsymbol x}$ and $j = 1,2$.
By \lemref{TV-cov} and then \lemref{2unif}, we have \begin{eqnarray*}
\|\bSigma({\boldsymbol x}) - \bSigma({\boldsymbol y})\|
&=& \|\operatorname{Cov}(\lambda_{A^1_{\boldsymbol x} \cup A^2_{\boldsymbol x}}) - \operatorname{Cov}(\lambda_{A^1_{\boldsymbol y} \cup A^2_{\boldsymbol y}})\| \\ &\le& {\rm TV}(\lambda_{A^1_{\boldsymbol x} \cup A^2_{\boldsymbol x}}, \lambda_{A^1_{\boldsymbol y} \cup A^2_{\boldsymbol y}}) \\ &\le& 4 \frac{\operatorname{vol}\big((A^1_{\boldsymbol x} \cup A^2_{\boldsymbol x}) \, \triangle \, (A^1_{\boldsymbol y} \cup A^2_{\boldsymbol y})\big)}{\operatorname{vol}\big((A^1_{\boldsymbol x} \cup A^2_{\boldsymbol x}) \cup (A^1_{\boldsymbol y} \cup A^2_{\boldsymbol y})\big)} \\ &\le& 4 \frac{\operatorname{vol}(A^1_{\boldsymbol x} \, \triangle \, A^1_{\boldsymbol y}) + \operatorname{vol}(A^2_{\boldsymbol x} \, \triangle \, A^2_{\boldsymbol y})}{\operatorname{vol}(A^1_{\boldsymbol x})}. \end{eqnarray*}
Note that $A^1_{\boldsymbol x}$ is the unit-radius ball of $T_1$ centered at ${\boldsymbol x}$, while $A^2_{\boldsymbol x}$ is the ball of $T_2$ centered at ${\boldsymbol x}_2 := P_{T_2}({\boldsymbol x})$ and of radius $\eta := \sqrt{1 - \|{\boldsymbol x} - {\boldsymbol x}_2\|^2}$. Similarly, $A^1_{\boldsymbol y}$ is the unit-radius ball of $T_1$ centered at ${\boldsymbol y}$, while $A^2_{\boldsymbol y}$ is the ball of $T_2$ centered at ${\boldsymbol y}_2 := P_{T_2}({\boldsymbol y})$ and of radius $\delta := \sqrt{1 - \|{\boldsymbol y} - {\boldsymbol y}_2\|^2}$. Therefore, applying \lemref{2ball-vol}, we get \[ \frac{\operatorname{vol}(A^1_{\boldsymbol x} \, \triangle \, A^1_{\boldsymbol y})}{2 \operatorname{vol}(A^1_{\boldsymbol x})} \le 1 - (1 - t)_+^d, \] and assuming WLOG that $\delta \le \eta$, and after proper scaling, we get \[\frac{\operatorname{vol}(A^2_{\boldsymbol x} \, \triangle \, A^2_{\boldsymbol y})}{2 \operatorname{vol}(A^1_{\boldsymbol x})} \le \zeta := \eta^d - (\eta - t_2)_+^d \wedge \delta^d,\]
where $t := \|{\boldsymbol x} - {\boldsymbol y}\|$ and $t_2 := \|{\boldsymbol x}_2 - {\boldsymbol y}_2\|$ --- note that $t_2 \le t$ by the fact that $P_{T_2}$ is 1-Lipschitz.
We have $1 - (1 - t)_+^d \le dt$. This is obvious when $t \ge 1$, while when $t \le 1$ it is obtained using the fact that, for any $0 \le s < t \le 1$, \begin{equation} \label{diff_d} t^d - s^d = (t-s)(t^{d-1} + st^{d-2} + \cdots + s^{d-2}t + s^{d-1}) \le d t^{d-1} (t-s) \le d (t - s). \end{equation} For the second ratio, we consider several cases. \begin{itemize} \item When $\eta \le t_2$, then $\zeta = \eta^d \le \eta \le t_2 \le t$. \item When $t_2 < \eta \le t_2 + \delta$, then $\zeta = \eta^d - (\eta - t_2)^d \le d t_2 \le d t$. \item When $\eta \ge t_2 + \delta$ and $d \ge 2$, we have \begin{eqnarray*} \zeta &=& \eta^d - \delta^d \le d \eta (\eta - \delta) \le d (\eta^2 - \delta^2) \\
&=& d (\|{\boldsymbol y} - {\boldsymbol y}_2\|^2 - \|{\boldsymbol x}-{\boldsymbol x}_2\|^2) \\
&=& d (\|{\boldsymbol y} - {\boldsymbol y}_2\|+ \|{\boldsymbol x}-{\boldsymbol x}_2\|)(\|{\boldsymbol y} - {\boldsymbol y}_2\|- \|{\boldsymbol x}-{\boldsymbol x}_2\|) \\ &\le& 2 d (t + t_2) \le 4d t, \end{eqnarray*} where the triangle inequality was applied in the last inequality, in the form of \[
\|{\boldsymbol y} - {\boldsymbol y}_2\| \le \|{\boldsymbol y} - {\boldsymbol x}\| + \|{\boldsymbol x}-{\boldsymbol x}_2\| + \|{\boldsymbol x}_2 - {\boldsymbol y}_2\| = \|{\boldsymbol x}-{\boldsymbol x}_2\| + t + t_2. \] \item When $\eta \ge t_2 + \delta$ and $d = 1$, we have \[
\zeta = \eta - \delta \le \sqrt{\|{\boldsymbol y} - {\boldsymbol y}_2\|- \|{\boldsymbol x}-{\boldsymbol x}_2\|} \le \sqrt{t + t_2} \le \sqrt{2t}, \] using the same triangle inequality and the fact that, for any $0 \le s < t \le 1$, \[ 0 \le \sqrt{1-s} - \sqrt{1-t} = \frac{t-s}{\sqrt{1-s} + \sqrt{1-t}} \le \frac{t-s}{\sqrt{1-t + t-s}} \le \frac{t-s}{\sqrt{t-s}} = \sqrt{t-s}. \]
\end{itemize} When $d\ge 2$, we can therefore bound $\|\bSigma({\boldsymbol x}) - \bSigma({\boldsymbol y})\|$ by $dt + 4dt = 5dt$, and when $d=1$, we bound that by $t + \sqrt{2t} \le \sqrt{6t}$. \end{proof}
The following is in some sense a converse to \lemref{cov-cont}, in that we lower-bound the distance between covariance matrices near an intersection of linear subspaces. Note that the covariance matrix does not change when moving parallel to the intersection; however, it does when moving perpendicular to the intersection.
\begin{lem} \label{lem:intersect} Let $T_1$ and $T_2$ be two linear subspaces of same dimension with $\theta_{\rm min}(T_1,T_2) \ge \theta_0 > 0$. Fix a unit norm vector ${\boldsymbol v} \in T_1 \cap (T_1 \cap T_2)^\perp$.
With $\bSigma(h {\boldsymbol v})$ denoting the covariance of the uniform distribution over $B(h {\boldsymbol v}, 1) \cap (T_1 \cup T_2)$, we have \[
\inf_{h} \ \sup_{\ell} \|\bSigma(h {\boldsymbol v}) - \bSigma(\ell {\boldsymbol v})\| \ge 1/\Clref{intersect}, \] where the infimum is over $0 < h < 1/\sin \theta_0$ and the supremum over $\max(0, h -1/2) \le \ell \le \min(1/\sin \theta_0, h +1/2)$, and $\Clref{intersect} >0$ depends only on $d$ and $\theta_0$. \end{lem}
\begin{proof} If the statement of the lemma is not true, there are subspaces $T_1$ and $T_2$ of same dimension $d$, a unit length vector ${\boldsymbol v} \in T_1 \cap (T_1 \cap T_2)^\perp$ and $0 \le h \le 1/\sin \theta_0$, such that \begin{equation} \label{hyp-intersect} \text{$\bSigma(\ell {\boldsymbol v}) = \bSigma(h {\boldsymbol v})$ for all $\max(0, h -1/2) \le \ell \le \min(1/\sin \theta_0, h +1/2)$.} \end{equation} By projecting onto $(T_1 \cap T_2)^\perp$, we may assume that $T_1 \cap T_2 = 0$ without loss of generality. Let $\theta = \angle({\boldsymbol v},T_2)$ and note that $\theta \ge \theta_0$ since $T_1 \cap T_2 = 0$. Define ${\boldsymbol u} = ({\boldsymbol v} - P_{T_2} {\boldsymbol v})/\sin\theta$ and also ${\boldsymbol w} = P_{T_2} {\boldsymbol v}/\cos\theta$ when $\theta < \pi/2$, and ${\boldsymbol w} \in T_2$ is any vector perpendicular to ${\boldsymbol v}$ when $\theta = \pi/2$.
$B(h {\boldsymbol v}, 1) \cap T_1$ is the $d$-dimensional ball of $T_1$ of radius 1 and center $h {\boldsymbol v}$, while --- using Pythagoras theorem --- $B(h {\boldsymbol v}, 1) \cap T_2$ is the $d$-dimensional ball of $T_2$ of radius $t := (1 - (h \sin \theta)^2)^{1/2}$ and center $(h \cos \theta) {\boldsymbol w}$. Let $X$ be drawn from the uniform distribution over $B(h {\boldsymbol v}, 1) \cap (T_1 \cup T_2)$, while $X_0$ and $X_0'$ are independently drawn from the uniform distributions over the unit balls of $T_1$ and $T_2$, respectively. By \lemref{unif-cov}, $\operatorname{Cov}(X_0) = c P_{T_1}$ and $\operatorname{Cov}(X_0') = c P_{T_2}$ where $c := 1/(d+2)$. Also, let $\xi$ be Bernoulli with parameter $\alpha$, where \[ \alpha := \frac{\operatorname{vol}(B(h {\boldsymbol v}, 1) \cap T_1)}{\operatorname{vol}(B(h {\boldsymbol v}, 1) \cap (T_1 \cup T_2))} = \frac{\operatorname{vol}(B(h {\boldsymbol v}, 1) \cap T_1)}{\operatorname{vol}(B(h {\boldsymbol v}, 1) \cap T_1) + \operatorname{vol}(B((h \cos \theta) {\boldsymbol w}, t) \cap T_2)} = \frac{1}{1 + t^d}. \] We have \[ X \sim \xi \big(h {\boldsymbol v} + X_0\big) + (1-\xi ) \big((h \cos \theta) {\boldsymbol w} + t X_0'\big). \] A straightforward calculation, or an application of the law of total covariance, leads to \begin{equation} \label{cov-mixture} \operatorname{Cov}(X) = \operatorname{\mathbb{E}}(\xi) \operatorname{Cov}(X_0) + \operatorname{\mathbb{E}}(1-\xi) t^2 \operatorname{Cov}(X_0') + \operatorname{Var}(\xi) h^2 ({\boldsymbol v} - (\cos \theta){\boldsymbol w})({\boldsymbol v} - (\cos \theta){\boldsymbol w})^\top, \end{equation} which simplifies to \[ \bSigma(h {\boldsymbol v}) = c \alpha P_{T_1} + c (1-\alpha) t^2 P_{T_2} + \alpha (1-\alpha ) (1 - t^2) {\boldsymbol u} {\boldsymbol u}^\top, \] using the fact that ${\boldsymbol v} - (\cos \theta){\boldsymbol w} = (\sin\theta) {\boldsymbol u}$ and the definition of $t$. Let $\theta_1 = \theta_{\rm max}(T_1,T_2)$ and let ${\boldsymbol v}_1 \in T_1$ be of unit length and such that $\angle({\boldsymbol v}_1, T_2) = \theta_1$. Then for any $0 \le h, \ell \le 1/\sin\theta_0$, we have \begin{equation} \label{hyp-intersect2}
\|\bSigma(h {\boldsymbol v}) - \bSigma(\ell {\boldsymbol v})\| \ge |{\boldsymbol v}_1^\top \bSigma(h {\boldsymbol v}){\boldsymbol v}_1 - {\boldsymbol v}_1^\top \bSigma(\ell {\boldsymbol v}){\boldsymbol v}_1| = |f(t_h) - f(t_\ell)|, \end{equation} where $t_h := (1 - (h\sin \theta)^2)^{1/2}$ and \begin{eqnarray*} f(t)
&=& \frac{c}{1+t^d} + \frac{c t^{d+2} (\cos \theta_1)^2}{1+t^d} + \frac{t^d(1 - t^{2}) ({\boldsymbol u}^\top {\boldsymbol v}_1)^2}{(1+t^d)^2}. \end{eqnarray*} It is easy to see that the interval \[I_h = \{t_\ell : (h -1/2)_+ \le \ell \le (1/\sin \theta_0) \wedge (h +1/2) \}\]
is non empty. Because of \eqref{hyp-intersect} and \eqref{hyp-intersect2}, $f(t)$ is constant over $t \in I_h$, but this is not possible since $f$ is a rational function not equal to a constant and therefore cannot be constant over an interval of positive length. \end{proof}
We now look at the eigenvalues of the covariance matrix. \begin{lem} \label{lem:eigen-inter} Let $T_1$ and $T_2$ be two linear subspaces of same dimension $d$. For ${\boldsymbol x} \in T_1$, denote by $\bSigma({\boldsymbol x})$ the covariance matrix of the uniform distribution over $B({\boldsymbol x}, 1) \cap (T_1 \cup T_2)$. Then, for all ${\boldsymbol x} \in T_1$, \begin{equation} c \big(1 - (1 - \delta^2({\boldsymbol x}))_+^{d/2}\big) \le \beta_d(\bSigma({\boldsymbol x})), \qquad \beta_1(\bSigma({\boldsymbol x})) \le c + \delta({\boldsymbol x}) (1 - \delta^2({\boldsymbol x}))_+^{d/2}, \label{eigen-inter1} \end{equation} \begin{equation} \frac{c}{8} (1 - \cos \theta_{\rm max}(T_1, T_2))^2 (1 - \delta^2({\boldsymbol x}))_+^{d/2+1} \le \beta_{d+1}(\bSigma({\boldsymbol x})) \le (c + \delta^2({\boldsymbol x})) (1 - \delta^2({\boldsymbol x}))_+^{d/2}, \label{eigen-inter2} \end{equation} where $c := 1/(d+2)$ and $\delta({\boldsymbol x}) := \dist({\boldsymbol x}, T_2)$. \end{lem}
\begin{proof}
As in \eqref{cov-mixture}, we have \begin{equation} \label{Sigma} \bSigma({\boldsymbol x}) = \alpha c P_{T_1} + (1-\alpha) c t^2 P_{T_2} + \alpha (1-\alpha) ({\boldsymbol x} - {\boldsymbol x}_2) ({\boldsymbol x} - {\boldsymbol x}_2)^\top, \end{equation} where ${\boldsymbol x}_2 := P_{T_2}({\boldsymbol x})$ and $\alpha := (1+t^d)^{-1}$ with $t := (1 - \delta^2({\boldsymbol x}))_+^{1/2}$. Because all the matrices in this display are positive semidefinite, we have \[
\beta_d(\bSigma({\boldsymbol x})) \ge \alpha c \|P_{T_1}\| = \alpha c, \] with $\alpha \ge 1-t^d$. And because of the triangle inequality, we have \[
\beta_1(\bSigma({\boldsymbol x})) \le \alpha c \|P_{T_1}\| + (1-\alpha) c t^2 \|P_{T_2} \| + \alpha (1-\alpha) \|{\boldsymbol x} - {\boldsymbol x}_2\|^2 \le c + \alpha (1-\alpha) \delta^2({\boldsymbol x}), \] with $\alpha (1-\alpha) \le t^d$. Hence, \eqref{eigen-inter1} is proved.
For the upper bound in \eqref{eigen-inter2}, by Weyl's inequality \citep[Cor.~IV.4.9]{MR1061154} and the fact that $\beta_{d+1}(P_{T_1}) = 0$, and then the triangle inequality, we get \begin{eqnarray*}
\beta_{d+1}(\bSigma({\boldsymbol x})) &\le& \|\bSigma({\boldsymbol x}) - \alpha c P_{T_1}\| \\
&\le& c (1-\alpha) t^2 \|P_{T_2}\| + \alpha (1-\alpha) \delta^2({\boldsymbol x}) \\ &\le& (1-\alpha) ( c + \delta^2({\boldsymbol x})), \end{eqnarray*} and we then use the fact that $1 -\alpha \le t^d$. For the lower bound, let $\theta_1 \ge \theta_2 \ge \cdots \ge \theta_d$ denote the principal angles between $T_1$ and $T_2$. By definition of principal angles, there are orthonormal bases for $T_1$ and $T_2$, denoted ${\boldsymbol v}_1, \dots, {\boldsymbol v}_d$ and ${\boldsymbol w}_1, \dots, {\boldsymbol w}_d$, such that ${\boldsymbol v}_j^\top {\boldsymbol w}_k = {\rm 1}\kern-0.24em{\rm I}_{j=k} \cdot \cos \theta_j$. Take ${\boldsymbol u} \in {\rm span}({\boldsymbol v}_1, \dots, {\boldsymbol v}_d, {\boldsymbol w}_1)$, that is, of the form ${\boldsymbol u} = a {\boldsymbol v}_1 + {\boldsymbol v} + b {\boldsymbol w}_1$, with ${\boldsymbol v} \in {\rm span}({\boldsymbol v}_2, \dots, {\boldsymbol v}_d)$. Since $P_{T_1} = {\boldsymbol v}_1 {\boldsymbol v}_1^\top + \cdots + {\boldsymbol v}_d {\boldsymbol v}_d^\top$ and $P_{T_2} = {\boldsymbol w}_1 {\boldsymbol w}_1^\top + \cdots + {\boldsymbol w}_d {\boldsymbol w}_d^\top$, we have \begin{eqnarray*} \frac1{c} {\boldsymbol u}^\top \bSigma({\boldsymbol x}) {\boldsymbol u}
&\ge& \alpha (a^2 + \|{\boldsymbol v}\|^2 + 2 a b \cos \theta_1 + b^2 \cos^2 \theta_1 ) + (1-\alpha) t^2 (b^2 + 2 a b \cos \theta_1 + a^2 \cos^2 \theta_1) \\ &=& \alpha (a + b \cos \theta_1)^2 + (1-\alpha) t^2 (a \cos \theta_1 + b)^2 + \alpha (1 - a^2 - b^2),
\end{eqnarray*} assuming $\|{\boldsymbol u}\|^2 = a^2 + \|{\boldsymbol v}\|^2 + b^2 = 1$. If $|a| \vee |b| \le 1/2$, then the RHS $\ge \alpha/2 \ge 1/4$. Otherwise, the RHS $\ge (1-\alpha)t^2 (1-\cos\theta_1)^2/4$, using the fact that $\alpha \ge 1 - \alpha \ge (1-\alpha)t^2$. Hence, by the Courant-Fischer theorem \citep[Cor.~IV.4.7]{MR1061154}, we have \[ \beta_{d+1}(\bSigma({\boldsymbol x})) \ge \frac{c}4 (1-\alpha)t^2(1-\cos\theta_1)^2, \] with $1-\alpha \ge t^d/2$. This proves \eqref{eigen-inter2}. \end{proof}
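The bounds \eqref{eigen-inter1}--\eqref{eigen-inter2} can be checked directly from the explicit expression \eqref{Sigma}. In the Python snippet below (an illustration of ours), $T_1$ and $T_2$ are two coordinate planes of $\mathbb{R}^3$, so that $d = 2$ and $\cos \theta_{\rm max}(T_1, T_2) = 0$, and the offsets $\delta$ are arbitrary.
\begin{verbatim}
# Checking the bounds on beta_{d+1}(Sigma(x)) for T_1 = xy-plane, T_2 = xz-plane in R^3.
import numpy as np

d = 2
c = 1.0 / (d + 2)
P1 = np.diag([1.0, 1.0, 0.0])                        # projection onto T_1
P2 = np.diag([1.0, 0.0, 1.0])                        # projection onto T_2
cos_max = 0.0                                        # cos(theta_max(T_1, T_2)) = 0 here

for delta in (0.2, 0.5, 0.9):
    x = np.array([0.3, delta, 0.0])                  # point of T_1 at distance delta from T_2
    x2 = np.array([0.3, 0.0, 0.0])                   # its projection onto T_2
    t2 = 1.0 - delta ** 2                            # t^2 = 1 - delta^2
    alpha = 1.0 / (1.0 + t2 ** (d / 2.0))
    Sigma = (alpha * c * P1 + (1.0 - alpha) * c * t2 * P2
             + alpha * (1.0 - alpha) * np.outer(x - x2, x - x2))
    beta = np.sort(np.linalg.eigvalsh(Sigma))[::-1]  # beta_1 >= beta_2 >= beta_3
    lo = (c / 8.0) * (1.0 - cos_max) ** 2 * t2 ** (d / 2.0 + 1)
    hi = (c + delta ** 2) * t2 ** (d / 2.0)
    print(delta, lo <= beta[d] <= hi)                # True: beta_{d+1} within the bounds
\end{verbatim}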
Below is a technical result on the covariance matrix of the uniform distribution on the intersection of a ball and the union of two smooth surfaces, near where the surfaces intersect. It generalizes \lemref{U-approx}.
\begin{lem} \label{lem:cov-inter} Suppose $S_1, S_2 \in \mathcal{S}_d(\kappa)$ intersect at a positive angle and that ${\rm reach}(S_1 \cap S_2) \ge 1/\kappa$. Then there is a constant $\Clref{cov-inter} \ge 3$ such that the following holds. Fix $r < 1/\Clref{cov-inter}$, and for ${\boldsymbol s} \in S_1$ with $\dist({\boldsymbol s}, S_2) \le r$, let ${\boldsymbol C}({\boldsymbol s})$ and $\bSigma({\boldsymbol s})$ denote the covariance matrices of the uniform distributions over $B({\boldsymbol s}, r) \cap (S_1 \cup S_2)$ and $B({\boldsymbol s}, r) \cap (T_1 \cup T_2)$, where $T_1 := T_{S_1}({\boldsymbol s})$ and $T_2 := T_{S_2}(P_{S_2}({\boldsymbol s}))$. Then \begin{equation} \label{cov-inter}
\|{\boldsymbol C}({\boldsymbol s}) - \bSigma({\boldsymbol s})\| \le \Clref{cov-inter} \, r^{3}. \end{equation} \end{lem}
\begin{proof}
Below $C$ denotes a positive constant depending only on $S_1$ and $S_2$ that increases with each appearance. We note that it is enough to prove the result when $r$ is small enough. Take ${\boldsymbol s} \in S_1$ such that $\delta := \dist({\boldsymbol s}, S_2) \le r$ and let ${\boldsymbol s}_2 = P_{S_2}({\boldsymbol s})$ --- note that $\|{\boldsymbol s} - {\boldsymbol s}_2\| = \delta$. Let $B_r$ be short for $B({\boldsymbol s}, r)$ and define $A_k = B_r \cap S_k$, ${\boldsymbol\mu}_k = \operatorname{\mathbb{E}}(\lambda_{A_k})$ and ${\boldsymbol D}_k = \operatorname{Cov}(\lambda_{A_k})$, for $k=1,2$. As in \eqref{cov-mixture}, we have \[ {\boldsymbol C}({\boldsymbol s}) = \alpha {\boldsymbol D}_1 + (1-\alpha) {\boldsymbol D}_2 + \alpha(1-\alpha) ({\boldsymbol\mu}_1 -{\boldsymbol\mu}_2)({\boldsymbol\mu}_1 -{\boldsymbol\mu}_2)^\top, \] where \[ \alpha := \frac{\operatorname{vol}(A_1)}{\operatorname{vol}(A_1) + \operatorname{vol}(A_2)}. \] Let $T_1 = T_{S_1}({\boldsymbol s})$ and $T_2 = T_{S_2}({\boldsymbol s}_2)$, and define $A_k' = B_r \cap T_k$, so that $B_r \cap (T_1 \cup T_2) = A_1' \cup A_2'$. Note that $\operatorname{\mathbb{E}}(\lambda_{A_1'}) = {\boldsymbol s}$ and $\operatorname{\mathbb{E}}(\lambda_{A_2'}) = {\boldsymbol s}_2$, and by \lemref{unif-cov}, ${\boldsymbol D}_1' := \operatorname{Cov}(\lambda_{A_1'}) = c r^2 P_{T_1}$ and ${\boldsymbol D}_2' := \operatorname{Cov}(\lambda_{A_2'}) = c (r^2 -\delta^2)P_{T_2}$, where $c := 1/(d+2)$. As in \eqref{Sigma}, we have \[ \bSigma({\boldsymbol s}) = \alpha' {\boldsymbol D}_1' + (1- \alpha') {\boldsymbol D}_2' + \alpha' (1-\alpha') ({\boldsymbol s} -{\boldsymbol s}_2)({\boldsymbol s} -{\boldsymbol s}_2)^\top, \] where \[ \alpha' := \frac{\operatorname{vol}(A_1')}{\operatorname{vol}(A_1') + \operatorname{vol}(A_2')}. \]
Since $|\alpha' (1-\alpha') - \alpha (1-\alpha) | \le |\alpha' - \alpha|$, we have \begin{eqnarray*}
\|{\boldsymbol C}({\boldsymbol s}) - \bSigma({\boldsymbol s})\| &\le& |\alpha' - \alpha| \big(\|{\boldsymbol D}_1'\| + \|{\boldsymbol D}_2'\| + \|{\boldsymbol s} -{\boldsymbol s}_2\|^2 \big) \\
&& + \ \alpha \|{\boldsymbol D}_1 - {\boldsymbol D}_1'\| + (1- \alpha) \|{\boldsymbol D}_2 - {\boldsymbol D}_2'\| + \alpha (1-\alpha) 4 r \big(\|{\boldsymbol\mu}_1 - {\boldsymbol s}\| + \|{\boldsymbol\mu}_2 - {\boldsymbol s}_2\|\big) \\[.05in]
&\le& (2 c +1) r^2 |\alpha' - \alpha| \\
&& + \ \|{\boldsymbol D}_1 - {\boldsymbol D}_1'\| \vee \|{\boldsymbol D}_2 - {\boldsymbol D}_2'\| + 2 r \big(\|{\boldsymbol\mu}_1 - {\boldsymbol s}\| \vee \|{\boldsymbol\mu}_2 - {\boldsymbol s}_2\|\big), \end{eqnarray*} using the triangle inequality multiple times, and in the first inequality we used the fact that \[
\|{\boldsymbol v} {\boldsymbol v}^\top - {\boldsymbol w} {\boldsymbol w}^\top\| \le \|({\boldsymbol v} -{\boldsymbol w}) {\boldsymbol v}^\top\| + \|{\boldsymbol w} ({\boldsymbol v} - {\boldsymbol w})^\top\| \le (\|{\boldsymbol v}\| + \|{\boldsymbol w}\|) \|{\boldsymbol v} - {\boldsymbol w}\|, \]
for any two vectors ${\boldsymbol v}, {\boldsymbol w} \in \mathbb{R}^D$. Assuming that $\kappa r \le 1/\Clref{U-approx}$, by \lemref{U-approx}, we have $\|{\boldsymbol\mu}_1 - {\boldsymbol s}\| \vee \|{\boldsymbol\mu}_2 - {\boldsymbol s}_2\| \le \Clref{U-approx} \kappa r^{2}$ and $\|{\boldsymbol D}_1 - {\boldsymbol D}_1'\| \vee \|{\boldsymbol D}_2 - {\boldsymbol D}_2'\| \le \Clref{U-approx} \kappa r^{3}$. Assuming that $\kappa r \le 1/3$, $P_{T_k}^{-1}$ is well-defined and $(1+\kappa r)$-Lipschitz on $P_{T_k}(S_k \cap B_r)$. And being an orthogonal projection, $P_{T_k}$ is 1-Lipschitz. Hence, applying \lemref{Lip-vol}, we have \[ 1 \le \frac{\operatorname{vol}(A_k)}{\operatorname{vol}(P_{T_k}(A_k))} \le 1+\kappa r, \quad k = 1,2. \] Then by \lemref{TV}, \[ 1 - \Clref{TV} \kappa r \le \frac{\operatorname{vol}(P_{T_k}(A_k))}{\operatorname{vol}(A_k')} \le 1 + \Clref{TV} \kappa r, \quad k = 1,2. \] So we get \[ 1 - C r \le \frac{\operatorname{vol}(A_k)}{\operatorname{vol}(A_k')} \le 1 + C r, \quad k = 1,2. \]
Since for all $a,b,a',b' > 0$ we have \begin{eqnarray}
\left| \frac{a}{a+b} - \frac{a'}{a'+b'} \right| &\le& \frac{|a - a'| \vee |b - b'|}{(a+b) \vee (a'+b')} \label{2frac} \\
&\le& \big|1 - a/a'| \vee |1 - b/b'|, \notag \end{eqnarray} we get \[
|\alpha - \alpha'| \le C r. \] Hence,
\[\|{\boldsymbol C}({\boldsymbol s}) - \bSigma({\boldsymbol s})\| \le C r^{3},\] so we are done with the proof. \end{proof}
\subsection{Performance guarantees for \algref{cov}}
We deal with the case where there is no noise, that is, $\tau = 0$ in \eqref{data-point}, so that the data points are ${\boldsymbol s}_1, \dots, {\boldsymbol s}_n$, sampled exactly on $S_1 \cup S_2$ according to the uniform distribution. We explain how things change when there is noise, meaning $\tau > 0$, in \secref{noise}.
Let $\Xi_i = \{j \neq i: {\boldsymbol s}_j \in N_r({\boldsymbol s}_i)\}$, with (random) cardinality $N_i = |\Xi_i|$. When there is no noise, ${\boldsymbol C}_i$ is the sample covariance of $\{{\boldsymbol s}_j : j \in \Xi_i\}$.
For $i \in [n]$, let $K_i = 1$ if ${\boldsymbol s}_i \in S_1$ and $=2$ otherwise, and let $T_i = T_{S_{K_i}}({\boldsymbol s}_i)$, which is the tangent subspace associated with data point ${\boldsymbol s}_i$. Given $N_i$, $\{{\boldsymbol s}_j : j \in \Xi_i\}$ are uniformly distributed on $S_{K_i} \cap B({\boldsymbol s}_i, r)$, and applying \lemref{cov} with rescaling, we get that for any $t > 0$ \[
\pr{\|{\boldsymbol C}_i - \operatorname{\mathbb{E}} {\boldsymbol C}_i\| > r^2 t \, \big| \, N_i} \leq 4 d \exp\left(-\frac{N_i t}{\Clref{cov}} \min\big(t, \frac{N_i}d\big) \right), \] for an absolute constant $\Clref{cov} \ge 1$. We may assume that $r < 1/(\Clref{N-size} \kappa)$ and let $n_\star := n r^d/\Clref{N-size}$. We assume throughout that $r$ is large enough that $n_\star \ge d$, for otherwise the result is void since the probability lower bound stated in \thmref{main} is negative. Using \lemref{N-size}, for any $t < 1$, \begin{eqnarray*}
\pr{\|{\boldsymbol C}_i - \operatorname{\mathbb{E}} {\boldsymbol C}_i\| > r^2 t} &\leq& \pr{\|{\boldsymbol C}_i - \operatorname{\mathbb{E}} {\boldsymbol C}_i\| > r^2 t \, \big| \, N_i \ge n_\star} + \pr{N_i < n_\star} \\ &\leq& 4 d \exp(- n_\star t^2/\Clref{cov}) + \Clref{N-size} n \exp(- n_\star) \\ &\le& (4 d + \Clref{N-size}) n \exp(- n_\star t^2/\Clref{cov}). \end{eqnarray*}
Define $\bSigma_i$ as the covariance of the uniform distribution on $T_i \cap B({\boldsymbol s}_i, r)$. Let \[
I_\star = \{i: K_j = K_i, \, \forall j \in \Xi_i\}, \] or equivalently, \[
I_\star^c = \{i: \exists j \text{ s.t. } K_j \ne K_i \text{ and } \|{\boldsymbol s}_j - {\boldsymbol s}_i\| \le r\}. \] By definition, $I_\star$ indexes the points whose neighborhoods do not contain points from the other cluster. Applying \lemref{U-approx}, this leads to \begin{equation} \label{cov-approx}
\|\operatorname{\mathbb{E}} {\boldsymbol C}_i - \bSigma_i\| \le \Clref{U-approx} \kappa r^3, \quad \forall i \in I_\star. \end{equation}
Define the events \[
\Omega_1 = \bigcap_{k = 1}^2 \big\{\forall {\boldsymbol s} \in S_k: \, \# \{i: K_i = k \text{ and } {\boldsymbol s}_i \in B({\boldsymbol s}, r/C_\Omega)\} > n_\star \big\},
\] where $C_\Omega := 100 d^2 \Clref{intersect}^2$, and \[
\Omega_2 = \left\{\|{\boldsymbol C}_i - \operatorname{\mathbb{E}} {\boldsymbol C}_i\| \le r^2 t, \text{ for all } i \in [n]\right\}, \] and their intersection $\Omega = \Omega_1 \cap \Omega_2$, where $t < 1$ will be determined later. Note that, under $\Omega_1$, $N_i \ge n_\star$. Applying the union bound, \begin{eqnarray*} \operatorname{\mathbb{P}}(\Omega^c) &\le& \operatorname{\mathbb{P}}(\Omega_1^c) + \operatorname{\mathbb{P}}(\Omega_2^c) \\ &\le& \Clref{N-size} n \exp(- n_\star) + n (4 d + \Clref{N-size}) \exp(- n_\star t^2/\Clref{cov}) \\ &\le& p_\Omega := (4 d + 2 \Clref{N-size}) n \exp(- n_\star t^2/\Clref{cov}). \end{eqnarray*} Assuming that $\Omega$ holds, by the triangle inequality, the definition of $\Omega_2$ and \eqref{cov-approx}, we have \begin{equation} \label{keybound}
\|{\boldsymbol C}_i - \bSigma_i\| \le \|{\boldsymbol C}_i - \operatorname{\mathbb{E}} {\boldsymbol C}_i\| + \|\operatorname{\mathbb{E}} {\boldsymbol C}_i - \bSigma_i\| \le \zeta r^2, \quad \forall i \in I_\star, \end{equation} where \begin{equation} \label{zeta} \zeta := t + \Clref{U-approx} \kappa r. \end{equation}
The inequality \eqref{keybound} leads, via the triangle inequality, to the decisive bound \begin{equation} \label{decisive}
\|{\boldsymbol C}_i - {\boldsymbol C}_j\| \le \|\bSigma_i - \bSigma_j\| + 2\zeta r^2, \qquad \forall i, j \in I_\star. \end{equation}
Take $i, j \in I_\star$ such that $K_i = K_j$ and $\|{\boldsymbol s}_i -{\boldsymbol s}_j\| \le \varepsilon$. Then by \lemref{unif-cov} and \lemref{P-diff}, property \eqref{T-diff} and the fact that $\sin(2\theta) \le 2 \sin \theta$ for all $\theta$, and the triangle inequality, we have \begin{equation} \label{eta-choice1}
\frac1{c r^2} \|\bSigma_i - \bSigma_j\| = \sin \theta_{\rm max}(T_i, T_j) \le 2 \kappa \|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le 2 \kappa\varepsilon, \end{equation} where $c := 1/(d+2)$. This implies that \begin{equation} \label{eta-choice1S}
\frac1{r^2} \|{\boldsymbol C}_i - {\boldsymbol C}_j\| \le 2 c \kappa \varepsilon + 2 \zeta. \end{equation} Therefore, if $\eta > 2 c \kappa \varepsilon + 2\zeta$, then any pair of points indexed by $i, j \in I_\star$ from the same cluster and within distance $\varepsilon$ are direct neighbors in the graph built by \algref{cov}.
Take $i, j \in I_\star$ such that $K_i \ne K_j$ and $\|{\boldsymbol s}_i -{\boldsymbol s}_j\| \le \varepsilon$. By \lemref{sep}, \[
\max\big[ \dist({\boldsymbol s}_i, S_1 \cap S_2), \dist({\boldsymbol s}_j, S_1 \cap S_2) \big] \le \Clref{sep} \|{\boldsymbol s}_i -{\boldsymbol s}_j\|. \] Let ${\boldsymbol z}$ be the mid-point of ${\boldsymbol s}_i$ and ${\boldsymbol s}_j$. Since the distance function to $S_1 \cap S_2$ is 1-Lipschitz, the display above gives \[ \dist({\boldsymbol z}, S_1 \cap S_2) \le \dist({\boldsymbol s}_i, S_1 \cap S_2) + \frac12 \|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le \Clref{sep} \varepsilon + \frac\varepsilon2. \] Assuming $(\Clref{sep}+1) \varepsilon < 1/\kappa$, let ${\boldsymbol s} = P_{S_1 \cap S_2} ({\boldsymbol z})$. Then, by the triangle inequality again, \[
\max\big[ \|{\boldsymbol s} - {\boldsymbol s}_i\| , \|{\boldsymbol s} - {\boldsymbol s}_j\| \big] \le \dist({\boldsymbol z}, S_1 \cap S_2) + \frac12 \|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le \Clref{sep} \varepsilon + \varepsilon = (\Clref{sep}+1) \varepsilon. \] Let $T'_i$ denote the tangent subspace of $S_{K_i}$ at ${\boldsymbol s}$ and let $\bSigma'_i$ be the covariance of the uniform distribution over $T'_i \cap B({\boldsymbol s}, r)$. Define $T'_j$ and $\bSigma'_j$ similarly. Then, as in \eqref{eta-choice1} we have \[
\frac1{c r^2} \|\bSigma_i - \bSigma'_i\| \le \kappa \|{\boldsymbol s}_i - {\boldsymbol s}\| \le \kappa (\Clref{sep}+1) \varepsilon, \] and similarly, \[
\frac1{c r^2} \|\bSigma_j - \bSigma'_j\| \le \kappa (\Clref{sep}+1) \varepsilon. \] \def\theta_{\scriptscriptstyle S}{\theta_{\scriptscriptstyle S}} Moreover, by \lemref{unif-cov} and \lemref{P-diff}, \[
\frac1{c r^2} \|\bSigma'_i - \bSigma'_j\| = \sin \theta_{\rm max}(T'_i, T'_j) \ge \sin \theta_{\scriptscriptstyle S}, \] where $\theta_{\scriptscriptstyle S}$ is short for $\theta(S_1, S_2)$. Hence, by the triangle inequality, \begin{equation} \label{eta-choice2}
\frac1{c r^2} \|\bSigma_i - \bSigma_j\| \ge \sin \theta_{\scriptscriptstyle S} - 2 \kappa (\Clref{sep}+1) \varepsilon, \end{equation} and then \begin{equation} \label{eta-choice2S}
\frac1{r^2} \|{\boldsymbol C}_i - {\boldsymbol C}_j\| \ge c \sin \theta_{\scriptscriptstyle S} - 2c \kappa (\Clref{sep}+1) \varepsilon - 2 \zeta. \end{equation} Therefore, if $\eta < c \sin \theta_{\scriptscriptstyle S} - 2c \kappa (\Clref{sep}+1) \varepsilon - 2 \zeta$, then any pair of points indexed by $i, j \in I_\star$ from different clusters are {\em not} direct neighbors in the graph built by \algref{cov}.
In summary, we would like to choose $\eta$ such that
\[ 2 c \kappa \varepsilon + 2\zeta < \eta < c\sin \theta_{\scriptscriptstyle S} - 2 c\kappa (\Clref{sep}+1) \varepsilon - 2 \zeta. \]
This holds when \[ 2 c \kappa \varepsilon + 2\zeta < \eta < \frac{c \sin \theta_{\scriptscriptstyle S}}{\Clref{sep}+2}, \] which is true when \begin{equation} \label{choice} \varepsilon < \frac{(d+2) \eta}{6 \kappa}, \quad t \le \frac\eta6, \quad r \le \frac\eta{6\Clref{U-approx} \kappa}, \quad \eta < \frac{\sin \theta_{\scriptscriptstyle S}}{(\Clref{sep}+2)(d+2)}, \end{equation} using the definition of $\zeta$ in \eqref{zeta} and that of $c = 1/(d+2)$.
We choose $t = \eta/6$ and get that $\operatorname{\mathbb{P}}(\Omega^c) \le C n \exp(- n r^d \eta^2 / C)$, where $C$ depends only on $d$ and $\Clref{N-size}$.
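For orientation, here is a minimal sketch (ours, not the authors' implementation) of the graph-building step analyzed in this section, specialized to the noiseless case: local covariances over $r$-neighborhoods, an edge between two points when they are within distance $\varepsilon$ and their covariances differ by at most $\eta r^2$ in operator norm, and clusters read off as connected components. It deliberately omits the removal of points performed in Step~3 of \algref{cov} and any further processing; the thresholds are meant to be set according to \eqref{choice}.
\begin{verbatim}
# A sketch (not the authors' code) of the covariance-graph step, noiseless case.
import numpy as np

def local_cov(points, i, r):
    # Sample covariance C_i of the points falling in the ball B(points[i], r).
    nbrs = points[np.linalg.norm(points - points[i], axis=1) <= r]
    return np.cov(nbrs, rowvar=False, bias=True)

def cov_graph_labels(points, r, eps, eta):
    n = len(points)
    covs = [local_cov(points, i, r) for i in range(n)]
    parent = list(range(n))                   # union-find over the n points

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for i in range(n):
        for j in range(i + 1, n):
            close = np.linalg.norm(points[i] - points[j]) <= eps
            similar = np.linalg.norm(covs[i] - covs[j], 2) <= eta * r ** 2
            if close and similar:             # i and j are direct neighbors
                parent[find(i)] = find(j)
    labels = np.unique([find(i) for i in range(n)], return_inverse=True)[1]
    return labels                             # connected-component labels
\end{verbatim}
Without the removal step, points within distance $r$ of the other cluster (those indexed by $I_\star^c$) may create edges joining the two clusters near the intersection; the next subsection shows that Step~3 eliminates precisely such points.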
\subsubsection{The different clusters are in different connected components} \label{sec:different-cc} We show that Step~3 in \algref{cov} eliminates all points $i \notin I_\star$, implying by our choice of parameters in \eqref{choice} that after that step the two clusters are not connected to each other in the graph. Hence, take $i \notin I_\star$ with $K_i = 1$ (say), so that $\dist({\boldsymbol s}_i, S_2) \le r$. By \lemref{sep}, we have $\dist({\boldsymbol s}_i, S_1 \cap S_2) \le \Clref{sep} r < 1/\kappa$. Assuming that $(\Clref{sep} + 1) r < 1/\kappa$, let ${\boldsymbol s}^0 = P_{S_1 \cap S_2}({\boldsymbol s}_i)$ and define $T^0_k = T_{S_k}({\boldsymbol s}^0)$ for $k = 1, 2$.
Below, $C > 0$ is a constant whose value increases with each appearance. By \lemref{cov-inter} (and the notation there), for ${\boldsymbol s} \in S_1$ such that $\dist({\boldsymbol s}, S_2) \le (\Clref{sep} + 1) r$,
\[\|{\boldsymbol C}({\boldsymbol s}) - \bSigma({\boldsymbol s})\| \le C \, r^{3}.\] We now derive another approximation that involves $\bSigma^0({\boldsymbol s})$, the covariance matrix of the uniform distribution on $B(P_{T_1^0}({\boldsymbol s}), r) \cap (T_1^0 \cup T_2^0)$. For that, we continue with the notation used in the proof of \lemref{cov-inter} until \eqref{2nd-approx} below.
Define ${\boldsymbol t}_1 = P_{T_1^0}({\boldsymbol s})$ and ${\boldsymbol t}_2 = P_{T^0_2}({\boldsymbol t}_1)$. Let $\delta_0 = \|{\boldsymbol t}_1 - {\boldsymbol t}_2\|$, $\delta_1 = \|{\boldsymbol s} - {\boldsymbol t}_1\|$, $\delta_2 = \|{\boldsymbol s}_2 - {\boldsymbol t}_2\|$ and $A_k^0 = T_k^0 \cap B({\boldsymbol t}_1,r)$. By \lemref{S-approx}, we have $\delta_1 \le C r^2$ and $\delta_2 \le C r^2$, because $\|{\boldsymbol s} - {\boldsymbol s}^0\| \le C r$ by \lemref{sep}, and
\[\|{\boldsymbol s}_2 - {\boldsymbol s}^0\| \le \|{\boldsymbol s}_2 - {\boldsymbol s}\| + \|{\boldsymbol s} - {\boldsymbol s}^0\|.\]
Hence, $|\delta_0 - \delta| \le \delta_1 + \delta_2 \le C r^2.$ We assume that $r$ is small enough that $C r^2 < r$, so that $A_1^0 \ne \emptyset$. Note that $\operatorname{\mathbb{E}}(\lambda_{A^0_k}) = {\boldsymbol t}_k$ and ${\boldsymbol D}_1^0 := \operatorname{Cov}(\lambda_{A_1^0}) = c r^2 P_{T_1^0}$, while ${\boldsymbol D}_2^0 := \operatorname{Cov}(\lambda_{A_2^0}) = c (r^2 -\delta_0^2) P_{T_2^0}$ when $\delta_0 \le r$; otherwise $A_2^0 = \emptyset$. As in \eqref{Sigma}, we have \begin{equation} \label{C0} \bSigma^0({\boldsymbol s}) = \alpha^0 {\boldsymbol D}_1^0 + (1- \alpha^0) {\boldsymbol D}_2^0 + \alpha^0 (1-\alpha^0) ({\boldsymbol t}_1 -{\boldsymbol t}_2)({\boldsymbol t}_1 -{\boldsymbol t}_2)^\top, \end{equation} where \[ \alpha^0 := \frac{\operatorname{vol}(A_1^0)}{\operatorname{vol}(A_1^0) + \operatorname{vol}(A_2^0)}. \] This identity remains valid even when $A_2^0 = \emptyset$. As in the proof of \lemref{cov-inter}, we have
\[\|\bSigma({\boldsymbol s}) - \bSigma^0({\boldsymbol s})\| \le (2 c +1) r^2 |\alpha' - \alpha^0| \\
+ \ \|{\boldsymbol D}_1' - {\boldsymbol D}_1^0\| \vee \|{\boldsymbol D}_2' - {\boldsymbol D}_2^0\| + 2 r \big(\|{\boldsymbol t}_1 - {\boldsymbol s}\| \vee \|{\boldsymbol t}_2 - {\boldsymbol s}_2\|\big).\]
By the triangle inequality and the fact that $\|P_T\| \le 1$ for any subspace $T$, \[
\|{\boldsymbol D}_1' - {\boldsymbol D}_1^0\| \le c r^2 \|P_{T_1} - P_{T^0_1}\|, \] and \[
\|{\boldsymbol D}_2' - {\boldsymbol D}_2^0\| \le c r^2 \|P_{T_2} - P_{T_2^0}\| + c |\delta^2 - {\delta_0}^2|. \] By \lemref{T-diff} and \lemref{P-diff}, we have \[
\|P_{T_1} - P_{T_1^0}\| \le \kappa \|{\boldsymbol s} - {\boldsymbol s}^0\| \le C r, \qquad \|P_{T_2} - P_{T_2^0}\| \le \kappa \|{\boldsymbol s}_2 - {\boldsymbol s}^0\| \le C r. \]
And since $|\delta^2 - {\delta_0}^2| \le 2 r |\delta - {\delta_0}| \le C r^3$, we have
$\|{\boldsymbol D}_k' - {\boldsymbol D}_k^0\| \le C r^3$ for $k=1,2$. Let $\omega_d$ denote the volume of the $d$-dimensional unit ball. Then \[ \operatorname{vol}(A_1') = \omega_d r^d, \quad \operatorname{vol}(A_2') = \omega_d (r^2 -\delta^2)^{d/2}, \quad \operatorname{vol}(A_1^0) = \omega_d r^d, \quad \operatorname{vol}(A_2^0) = \omega_d (r^2 -{\delta_0}^2)_+^{d/2}, \] so that
\begin{eqnarray*}
|\alpha' - \alpha^0|
&=& \big| \frac1{1 + (1 - \delta/r)^{d/2}} - \frac1{1 + (1 - \delta_0/r)_+^{d/2}}\big| \\
&\le& \big|(1 - \delta/r)^{d/2} - (1 - \delta_0/r)_+^{d/2}\big|.
\end{eqnarray*}
Proceeding exactly as when we bounded $\zeta$ in the proof of \lemref{cov-cont}, we get \[
|\alpha' - \alpha^0| \le d \sqrt{|\delta - \delta_0|/r} \le C \sqrt{r}. \] Hence, we proved that
\[\|\bSigma({\boldsymbol s}) - \bSigma^0({\boldsymbol s})\| \le Cr^{5/2}.\] We conclude with the triangle inequality that \begin{equation} \label{2nd-approx}
\|{\boldsymbol C}({\boldsymbol s}) - \bSigma^0({\boldsymbol s})\| \le \|{\boldsymbol C}({\boldsymbol s}) - \bSigma({\boldsymbol s})\| + \|\bSigma({\boldsymbol s}) - \bSigma^0({\boldsymbol s})\| \le C r^{5/2}.
\end{equation} Again, this holds for any ${\boldsymbol s} \in S_1$ such that $\|{\boldsymbol s}-{\boldsymbol s}^0\| \le (\Clref{sep} + 1) r$.
Assuming that ${\boldsymbol s}_i \ne {\boldsymbol s}^0$ (which holds with probability one) and, without loss of generality, that ${\boldsymbol s}^0 = 0$, let $h = \|{\boldsymbol s}_i - {\boldsymbol s}^0\|$ and ${\boldsymbol v} = ({\boldsymbol s}_i - {\boldsymbol s}^0)/h$. Note that ${\boldsymbol s}_i = h {\boldsymbol v}$. Because ${\boldsymbol v} \perp T_1^0 \cap T_2^0$ and $\theta_{\rm min}(T_1^0, T_2^0) \ge \theta_{\scriptscriptstyle S}$, we apply \lemref{intersect} with scaling to find $\ell \in h \pm r/2$ such that $\|\bSigma^0(\ell {\boldsymbol v}) - \bSigma^0(h {\boldsymbol v})\| \ge r^2/\Clref{intersect}$, where $\Clref{intersect} > 0$ depends only on $\theta_{\scriptscriptstyle S}$ and $d$. Letting $\tilde{{\boldsymbol s}} = \ell {\boldsymbol v}$, we have $\|\tilde{{\boldsymbol s}} - {\boldsymbol s}_i\| = |h -\ell| \le r/2$, so that \[\dist(\tilde{{\boldsymbol s}}, S_1 \cap S_2) \le \dist({\boldsymbol s}_i, S_1 \cap S_2) + r/2 < (\Clref{sep} + 1/2) r < 1/\kappa,\] and consequently, $P_{S_1 \cap S_2}(\tilde{{\boldsymbol s}}) = {\boldsymbol s}^0$, by \citep[Th 4.8(12)]{MR0110078}. Hence, by the triangle inequality, \begin{eqnarray*}
\|{\boldsymbol C}({\boldsymbol s}_i) - {\boldsymbol C}(\tilde{{\boldsymbol s}})\|
&\ge& \|\bSigma^0({\boldsymbol s}_i) - \bSigma^0(\tilde{{\boldsymbol s}})\| - \|{\boldsymbol C}({\boldsymbol s}_i) - \bSigma^0({\boldsymbol s}_i)\| - \|{\boldsymbol C}(\tilde{{\boldsymbol s}}) - \bSigma^0(\tilde{{\boldsymbol s}})\| \\ &\ge& r^2/\Clref{intersect} - 2 C r^{5/2}. \end{eqnarray*}
We apply the same arguments but now coupled with \lemref{cov-cont} after applying a proper scaling, to get \begin{eqnarray*}
\|{\boldsymbol C}(\bar{{\boldsymbol s}}) - {\boldsymbol C}(\tilde{{\boldsymbol s}})\|
&\le& \|\bSigma^0(\bar{{\boldsymbol s}}) - \bSigma^0(\tilde{{\boldsymbol s}})\| + \|{\boldsymbol C}(\bar{{\boldsymbol s}}) - \bSigma^0(\bar{{\boldsymbol s}})\| + \|{\boldsymbol C}(\tilde{{\boldsymbol s}}) - \bSigma^0(\tilde{{\boldsymbol s}})\| \\
&\le& 5d r^{3/2} \sqrt{\|\bar{{\boldsymbol s}} - \tilde{{\boldsymbol s}}\|} + 2 C r^{5/2},
\end{eqnarray*} for any $\bar{{\boldsymbol s}} \in S_1$ such that $\|\bar{{\boldsymbol s}} - \tilde{{\boldsymbol s}}\| \le r/2$, since this implies that $\|\bar{{\boldsymbol s}}-{\boldsymbol s}^0\| \le (\Clref{sep} + 1) r$. Hence, when $\|\bar{{\boldsymbol s}} - \tilde{{\boldsymbol s}}\| \le r/C_\Omega$ and $r^{1/2} \le 1/(16 C \Clref{intersect})$, we have \[
\|{\boldsymbol C}(\bar{{\boldsymbol s}}) - {\boldsymbol C}({\boldsymbol s}_i)\| \ge r^2 \big(1/\Clref{intersect} - 5d \sqrt{1/C_\Omega} - 4 C r^{1/2}\big) \ge r^2/(4\Clref{intersect}). \]
Now, under $\Omega$, there is ${\boldsymbol s}_j \in S_1 \cap B(\bar{{\boldsymbol s}}, r/C_\Omega)$, so that $\|{\boldsymbol C}_j - {\boldsymbol C}_i\| \ge r^2/(4\Clref{intersect})$. Therefore, choosing $\eta$ such that $\eta < 1/(4\Clref{intersect})$, we see that $\|{\boldsymbol C}_j - {\boldsymbol C}_i\| > \eta$, even though $\|{\boldsymbol s}_j - {\boldsymbol s}_i\| \le r/2 + r/C_\Omega \le r$.
\subsubsection{Each cluster forms a connected component in the graph} \label{sec:same-cc} We show that the points that survive Step~3 and belong to the same cluster form a connected component in the graph, except for possible spurious points near the intersection. Take, for example, the cluster generated from sampling $S_1$. The danger is that Step~3 created a ``crevice" within this cluster wide enough to disconnect it. We show this is not the case. (Note that the crevice may be made of several disconnected pieces.)
Before we start, we recall that $I_\star^c$ was eliminated in Step~3, so that by our choice of $\eta$ in \eqref{choice}, to show that two points ${\boldsymbol s}_i, {\boldsymbol s}_j$ sampled from $S_1$ are neighbors it suffices to show that $\|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le \varepsilon$.
We first bound the width of the crevice. Let $I_\circ = \{i \in I_\star: \Xi_i \subset I_\star\}$. By our choice of parameters in \eqref{choice}, we see that $i \in I_\circ$ is neighbor with any $j \in \Xi_i$, so that $i$ survives Step~3. Hence, the nodes removed at Step~3 are in $I_\ddag := \{i: \Xi_i \cap I_\star^c \ne \emptyset\}$, with the possibility that some nodes in $I_\ddag$ survive. Now, for any $i \notin I_\star$, there is $j$ with $K_{j} \ne K_i$ such that $\|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le r$, so by \lemref{sep}, \[
\dist({\boldsymbol s}_i, S_1 \cap S_2) \le \Clref{sep} \|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le \Clref{sep} r. \] By the triangle inequality, this implies that $\dist({\boldsymbol s}_i, S_1 \cap S_2) \le r_1 := (\Clref{sep}+1) r$ for all $i \in I_\ddag$. So the crevice is along $S_1 \cap S_2$ and of width bounded by $r_1$. We will require that $\varepsilon$ is sufficiently larger than $r_1$, which intuitively will ensure that the crevice is not wide enough to disconnect the subgraph corresponding to $I_\circ$. Let $R := S_1 \setminus B(S_1 \cap S_2, r_2)$, $r_2 = r_1 + r = (\Clref{sep}+2) r$, so that any ${\boldsymbol s}_i \in S_1$ such that $\dist({\boldsymbol s}_i, R) < r$ survives Step~3.
Take two adjacent connected components of $R$, denoted $R_1$ and $R_2$. We show that there is at least one pair $j_1, j_2$ of direct neighbors in the graph such that ${\boldsymbol s}_{j_m} \in R_m$. Take ${\boldsymbol s}$ on the connected component of $S_1 \cap S_2$ separating $R_1$ and $R_2$. Let $T^k = T_{S_k}({\boldsymbol s})$ and let $H$ be the affine subspace parallel to $(T^1 \cap T^2)^\perp$ passing through ${\boldsymbol s}$. Take ${\boldsymbol t}^m \in P_{T^1}(R_m) \cap H \cap \partial B({\boldsymbol s}, \varepsilon_1)$, where $\varepsilon_1 := \varepsilon/2$, and define ${\boldsymbol s}^m = P_{T^1}^{-1}({\boldsymbol t}^m)$. Note that here ${\boldsymbol t}^1, {\boldsymbol t}^2 \in T^1$ and ${\boldsymbol s}^1, {\boldsymbol s}^2 \in S_1$, and by \citep[Th 4.8(12)]{MR0110078}, $P_{S_1 \cap S_2}({\boldsymbol t}^m) = {\boldsymbol s}$. \lemref{proj} not only justifies this construction when $\kappa \varepsilon_1 < 1/3$, it also says that $P_{T^1}^{-1}$ has Lipschitz constant bounded by $1+\kappa \varepsilon_1$, which implies that
\[\|{\boldsymbol s}^m - {\boldsymbol s}\| \le (1 + \kappa \varepsilon_1) \|{\boldsymbol t}^m -{\boldsymbol s}\| = (1 + \kappa \varepsilon_1) \varepsilon_1 \le \varepsilon/3,\] when $\varepsilon$ is sufficiently small. We also have \begin{eqnarray*}
\dist({\boldsymbol s}^m, S_1 \cap S_2) &\ge& \dist({\boldsymbol t}^m, S_1 \cap S_2) - \|{\boldsymbol s}^m - {\boldsymbol t}^m\| \\
&=& \|{\boldsymbol t}^m - {\boldsymbol s}\| - \|{\boldsymbol s}^m - {\boldsymbol t}^m\| \\
&\ge& \varepsilon_1 - \frac\kappa2 \|{\boldsymbol s}^m - {\boldsymbol s}\|^2 \\ &\ge& \big(1 - \frac\kappa2(1 + \kappa \varepsilon_1)^2 \varepsilon_1\big) \varepsilon_1 \\ &\ge& \varepsilon/3, \end{eqnarray*} when $\varepsilon$ is sufficiently small. We used \eqref{S-approx1} in the second inequality. We assume $r/\varepsilon$ is sufficiently small that $\varepsilon/3 \ge r_2 + r$. Then under $\Omega_1$, there are $j_1, j_2$ such that ${\boldsymbol s}_{j_m} \in B({\boldsymbol s}^m, r) \cap S_1$. By the triangle inequality, we then have that $\dist({\boldsymbol s}_{j_m}, S_1 \cap S_2) \ge \varepsilon/3 - r \ge r_2$, so that ${\boldsymbol s}_{j_m} \in R_m$, and \begin{eqnarray*}
\|{\boldsymbol s}_{j_1} - {\boldsymbol s}_{j_2}\| &\le& \|{\boldsymbol s}_{j_1} - {\boldsymbol s}^1\| + \|{\boldsymbol s}^1 - {\boldsymbol s}\| + \|{\boldsymbol s} - {\boldsymbol s}^2\| + \|{\boldsymbol s}^2 - {\boldsymbol s}_{j_2}\| \\ &\le& r + \varepsilon/3 + \varepsilon/3 + r \\ &=& \frac23 \varepsilon + 2 r \le \varepsilon, \end{eqnarray*} because $6 r \le 3 (r + r_1) \le \varepsilon$ by assumption.
Now, we show that the points sampled from $R_1$ form a connected subgraph. ($R_1$ is any connected component of $R$.) Take ${\boldsymbol s}^1, \dots, {\boldsymbol s}^M$ an $r$-packing of $R_1$, so that \[\bigcup_m (R_1 \cap B({\boldsymbol s}^m, r/2)) \subset R_1 \subset \bigcup_m (R_1 \cap B({\boldsymbol s}^m, r)).\]
Because $R_1$ is connected, $\cup_m B({\boldsymbol s}^m, r)$ is necessarily connected. Under $\Omega_1$, and $\Clref{cov-inter} \ge 2$, all the balls $B({\boldsymbol s}^m, r), m=1, \dots, M,$ contain at least one ${\boldsymbol s}_i \in S_1$, and any such point survives Step~3 since $\dist({\boldsymbol s}_i, R_1) < r$ by the triangle inequality. Two points ${\boldsymbol s}_i$ and ${\boldsymbol s}_j$ such that ${\boldsymbol s}_i,{\boldsymbol s}_j \in B({\boldsymbol s}^m, r)$ are connected, since $\|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le 2 r \le \varepsilon$. And when $B({\boldsymbol s}^{m_1}, r) \cap B({\boldsymbol s}^{m_2}, r) \ne \emptyset$, ${\boldsymbol s}_i \in B({\boldsymbol s}^{m_1}, r)$ and ${\boldsymbol s}_j \in B({\boldsymbol s}^{m_2}, r)$ are such that $\|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le 4 r \le \varepsilon$. Hence, the points sampled from $R_1$ are connected in the graph under $\Omega_1$.
We conclude that the nodes corresponding to $R$ that survive Step~3 are connected in the graph.
\subsubsection{Choice of parameters}
Aside from the constraints displayed in \eqref{choice}, we assumed in addition that \[ r < \frac1{\Clref{N-size} \kappa}, \quad r < \frac1{\Clref{sep} \kappa + 1}, \quad \eta < \frac1{4\Clref{intersect}}, \quad 3 (\Clref{sep}+3) r \le \varepsilon, \quad \varepsilon < \frac7{8 \kappa}, \] and that $\varepsilon$ was sufficiently small. Therefore, it suffices to choose the parameters as in \eqref{main} with a large-enough constant.
\subsection{Performance guarantees for \algref{proj}}
We keep the same notation and go a little faster here as the arguments are parallel. Let $d_i$ denote the estimated dimensionality at point ${\boldsymbol s}_i$, meaning the number of eigenvalues of ${\boldsymbol C}_i$ exceeding $\sqrt{\eta} \, \|{\boldsymbol C}_i\|$. Recall that ${\boldsymbol Q}_i$ denotes the orthogonal projection onto the top $d_i$ eigenvectors of ${\boldsymbol C}_i$. The arguments hinge on showing that, under $\Omega$, $d_i = d$ for all $i \in I_\star$ and that $d_i > d$ for $i$ such that $\dist({\boldsymbol s}_i, S_1 \cap S_2) \le r/C$, for some constant $C >0$.
Take $i\in I_\star$. Under $\Omega$, \eqref{keybound} holds, and applying Weyl's inequality \citep[Cor.~IV.4.9]{MR1061154}, we have \[
|\beta_m({\boldsymbol C}_i) - \beta_m(\bSigma_i) | \le \zeta r^2, \quad \forall m =1,\dots, D. \] By \lemref{unif-cov}, $\bSigma_i = c r^2 P_{T_i}$, so that $\beta_m(\bSigma_i) = c r^2$ when $m \le d$ and $\beta_m(\bSigma_i) = 0$ when $m > d$. Hence, \[ \beta_1({\boldsymbol C}_i) \le (c +\zeta) r^2, \qquad \beta_d({\boldsymbol C}_i) \ge (c -\zeta) r^2, \qquad \beta_{d+1}({\boldsymbol C}_i) \le \zeta r^2. \] This implies that \[ \frac{\beta_d({\boldsymbol C}_i)}{\beta_1({\boldsymbol C}_i)} \ge \frac{c-\zeta}{c+\zeta} > \sqrt{\eta}, \qquad \frac{\beta_{d+1}({\boldsymbol C}_i)}{\beta_1({\boldsymbol C}_i)} \le \frac{\zeta}{c+\zeta} < \sqrt{\eta}, \] when $\zeta \le \eta/2$ as in \eqref{choice} and $\eta$ is sufficiently small. When this is so, $d_i = d$ by definition of $d_i$.
Note that the top $d$ eigenvectors of $\bSigma_i$ generate $T_i$. Hence, applying the Davis-Kahan theorem, stated in \lemref{davis}, and \eqref{keybound} again, we get that \[
\|{\boldsymbol Q}_i - P_{T_i}\| \le \frac{\sqrt{2} \, \zeta r^2}{c r^2} = \zeta' := \sqrt{2} (d+2) \zeta, \quad \forall i \in I_\star. \] This is the equivalent of \eqref{keybound}, which leads to the equivalent of \eqref{decisive}: \[
\|{\boldsymbol Q}_i - {\boldsymbol Q}_j\| \le \frac1{cr^2} \|\bSigma_i - \bSigma_j\| + 2\zeta', \qquad \forall i,j \in I_\star, \] using the fact that $\bSigma_i = c r^2 P_{T_i}$. When $i,j \in I_\star$ are such that $K_i = K_j$, based on \eqref{eta-choice1}, we have \[
\|{\boldsymbol Q}_i - {\boldsymbol Q}_j\| \le 2 \kappa \varepsilon + 2\zeta'. \]
Hence, when $\eta > 2 \kappa \varepsilon + 2\zeta'$, two nodes $i, j \in I_\star$ such that $K_i = K_j$ and $\|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le \varepsilon$ are neighbors in the graph. The arguments provided in \secref{same-cc} now apply in exactly the same way to show that nodes $i \in I_\star$ such that $K_i = 1$ belong to a single connected component in the graph, except for possible nodes near the intersection. The same is true of nodes $i \in I_\star$ such that $K_i = 2$.
Therefore, it remains to show that these two sets of nodes are not connected. When we take $i, j \in I_\star$ such that $K_i \ne K_j$, we have the equivalent of \eqref{eta-choice2S}, meaning, \[
\|{\boldsymbol Q}_i - {\boldsymbol Q}_j\| \ge \sin \theta_{\scriptscriptstyle S} - 2 \kappa (\Clref{sep}+1) \varepsilon - 2 \zeta'. \] We choose $\eta$ smaller than the RHS, so that these nodes are not neighbors in the graph.
We next prove that a node $i \in I_\star$ is not neighbor to a node near the intersection because of different estimates for the local dimension. Take ${\boldsymbol s} \in S_1$ such that $\delta({\boldsymbol s}) := \dist({\boldsymbol s}, S_2) < r$. We apply \lemref{cov-inter} and use the notation there until \eqref{delta} below, with the exception that we use ${\boldsymbol s}^2 = P_{S_2}({\boldsymbol s})$ and $T_{2(1)}$ to denote $T_{S_2}({\boldsymbol s}^2)$. Together with Weyl's inequality, we have \[ \beta_{d+1}({\boldsymbol C}({\boldsymbol s})) \ge \beta_{d+1}(\bSigma({\boldsymbol s})) - \Clref{cov-inter} r^{3}, \qquad \beta_{1}({\boldsymbol C}({\boldsymbol s})) \le \beta_1(\bSigma({\boldsymbol s})) + \Clref{cov-inter} r^{3}, \] which together with \lemref{eigen-inter} (and proper scaling), implies that \[ \frac{\beta_{d+1}({\boldsymbol C}({\boldsymbol s}))}{\beta_1({\boldsymbol C}({\boldsymbol s}))} \ge \frac{\frac{c}{8} (1-\cos \theta_{\rm max}(T_1,T_{2(1)}))^2 (1 - (\delta({\boldsymbol s})/r)^2)_+^{d/2+1} -\Clref{cov-inter} r^{3}}{c + (\delta({\boldsymbol s})/r) (1 - (\delta({\boldsymbol s})/r)^2)_+^{d/2} + \Clref{cov-inter} r^{3}}. \] Define ${\boldsymbol s}^0$, $T_1^0$ and $T_2^0$ as in \secref{different-cc}. Then, by the triangle inequality, \[ \theta_{\rm max}(T_1,T_{2(1)}) \ge \theta_{\rm max}(T_1^0,T_2^0) - \theta_{\rm max}(T_1,T_1^0) -\theta_{\rm max}(T_{2(1)},T_2^0). \] By definition, $\theta_{\rm max}(T_1^0,T_2^0) \ge \theta_{\scriptscriptstyle S}$, and by \lemref{T-diff}, \[
\theta_{\rm max}(T_1,T_1^0) \le 2 \asin\left(1 \wedge \frac\kappa2 \|{\boldsymbol s} - {\boldsymbol s}^0\|\right) \le C r, \] and similarly, \[
\theta_{\rm max}(T_{2(1)},T_2^0) \le 2 \asin\left(1 \wedge \frac\kappa2 \|{\boldsymbol s}^2 - {\boldsymbol s}^0\| \right) \le C r, \]
because $\|{\boldsymbol s} - {\boldsymbol s}^0\| \le C r$ and $\|{\boldsymbol s}^2 - {\boldsymbol s}^0\| \le r + \|{\boldsymbol s} - {\boldsymbol s}^0\| \le C r$. Hence, for $r$ small enough, $\theta_{\rm max}(T_1,T_{2(1)}) \ge \theta_{\scriptscriptstyle S}/2$, and furthermore, \begin{equation} \label{delta} \frac{\beta_{d+1}({\boldsymbol C}({\boldsymbol s}))}{\beta_1({\boldsymbol C}({\boldsymbol s}))} \ge \sqrt{\eta} \quad \text{ when } \quad 1 - \frac{\delta({\boldsymbol s})^2}{r^2} \ge \xi^{2/(d+2)}, \end{equation} where \[ \xi := \frac{9 (1 + 1/c) \sqrt{\eta}}{(1-\cos (\theta_{\scriptscriptstyle S}/2))^2}, \]
by the fact that $\eta \ge r$ in \eqref{choice}. The same is true for points on ${\boldsymbol s} \in S_2$ if we redefine $\delta({\boldsymbol s}) = \dist({\boldsymbol s}, S_1)$. Hence, for ${\boldsymbol s}_i$ close enough to the intersection that $\delta({\boldsymbol s}_i)$ satisfies \eqref{delta}, $d_i > d$. Then, by \lemref{P-diff}, $\|{\boldsymbol Q}_i - {\boldsymbol Q}_j\| = 1$ for any $j \in I_\star$. By our choice of $\eta < 1$, this means that $i$ and $j$ are not neighbors.
So the only way $\{i \in I_\star : K_i = 1\}$ and $\{i \in I_\star : K_i = 2\}$ are connected in the graph is if there are ${\boldsymbol s}_i \in S_1$ and ${\boldsymbol s}_j \in S_2$ such that $\|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le \varepsilon$ and both $\delta({\boldsymbol s}_i)$ and $\delta({\boldsymbol s}_j)$ fail to satisfy \eqref{delta}. We now show this is not possible.
By \lemref{cov-inter}, we have \[
\|{\boldsymbol C}_i - \bSigma_i\| \le \Clref{cov-inter} \, r^{3}. \] By \eqref{Sigma} (and using the corresponding notation) and the triangle inequality \begin{eqnarray*}
\|\bSigma_i - \alpha_i c r^2 P_{T_i}\| &\le& c (1-\alpha_i) t_i^2 r^2 + \alpha_i (1-\alpha_i) \delta^2({\boldsymbol s}_i) \le 2 (1 - \alpha_i) r^2 \\ &\le& 2 (1 - (\delta({\boldsymbol s}_i)/r)^2)_+^{d/2} r^2 \le 2 \xi^{d/(d+2)} r^2 , \end{eqnarray*} where the very last inequality comes from $\delta({\boldsymbol s}_i)$ not satisfying \eqref{delta}. Hence, \[
\|{\boldsymbol C}_i - \alpha_i c r^2 P_{T_i}\| \le 2 \xi^{d/(d+2)} r^2 + \Clref{cov-inter} \, r^{3}, \] and since $\beta_{d+1}(P_{T_i}) = 0$, by the Davis-Kahan theorem, we have \[
\|{\boldsymbol Q}_i - P_{T_i}\| \le \frac1{\alpha_i c r^2} \big[\xi^{d/(d+2)} r^2 + \Clref{cov-inter} \, r^{3} \big] \le C (\xi^{d/(d+2)} + r), \] and similarly, \[
\|{\boldsymbol Q}_j - P_{T_j}\| \le C (\xi^{d/(d+2)} + r). \]
By \lemref{P-diff}, $\|P_{T_i} - P_{T_j}\| = \sin \theta_{\rm max}(T_i, T_j)$. Let ${\boldsymbol s}^0 = P_{S_1 \cap S_2}({\boldsymbol s}_i)$, and define $T_1^0$ and $T_2^0$ as before. We have \[ \theta_{\rm max}(T_i, T_j) \ge \theta_{\rm max}(T_1^0, T_2^0) - \theta_{\rm max}(T_i, T_1^0) - \theta_{\rm max}(T_j, T_2^0) \ge \theta_{\scriptscriptstyle S} - C \varepsilon, \]
calling in \lemref{T-diff} as before, coupled with the fact that $\|{\boldsymbol s}_i - {\boldsymbol s}^0\| \le C \varepsilon$ and $\|{\boldsymbol s}_j - {\boldsymbol s}^0\| \le C \varepsilon$, since $\dist({\boldsymbol s}_i, S_2) \le \|{\boldsymbol s}_i - {\boldsymbol s}_j\| \le \varepsilon$ and \lemref{sep} applies, and then $\|{\boldsymbol s}_j - {\boldsymbol s}^0\| \le \|{\boldsymbol s}_i - {\boldsymbol s}^0\| + \|{\boldsymbol s}_j - {\boldsymbol s}_i\|$. Hence, assuming $\varepsilon$ is small enough, \begin{eqnarray*}
\|{\boldsymbol Q}_i - {\boldsymbol Q}_j\|
&\ge& \|P_{T_i} - P_{T_j}\| -\|{\boldsymbol Q}_i - P_{T_i}\| - \|{\boldsymbol Q}_j - P_{T_j}\| \\ &\ge& \sin(\theta_{\scriptscriptstyle S}/2) -C (\xi^{d/(d+2)} + r) > \eta, \end{eqnarray*} when $r$ and $\eta$ (and therefore $\xi$) are small enough. Therefore $i$ and $j$ are not neighbors, as we needed to show.
We conclude by remarking that, by choosing $C$ large enough in \eqref{main}, the resulting choice of parameters fits all our (often implicit) requirements.
\subsection{Noisy case} \label{sec:noise}
So far we only dealt with the case where $\tau = 0$ in \eqref{data-point}. When $\tau > 0$, a sample point ${\boldsymbol x}_i$ is in general different than its corresponding point ${\boldsymbol s}_i$ sampled from one of the surfaces. However, when $\tau/r$ is small, this does not change things much. For one thing, the points are close to each other, since we have $\|{\boldsymbol x}_i - {\boldsymbol s}_i\| \le \tau$ by assumption, and $\tau$ is small compared to $r$. And the corresponding covariance matrices are also close to each other. To see this, redefine $\Xi_i = \{j \neq i: {\boldsymbol x}_j \in N_r({\boldsymbol x}_i)\}$ and ${\boldsymbol C}_i$ as the sample covariance of $\{{\boldsymbol x}_j: j \in \Xi_i\}$. Let ${\boldsymbol D}_i$ denote the sample covariance of $\{{\boldsymbol s}_j: j \in \Xi_i\}$. Let $X$ be uniform over $\{{\boldsymbol x}_j : j \in \Xi_i\}$ and define $Y = \sum_j {\boldsymbol s}_j {\rm 1}\kern-0.24em{\rm I}_{\{X = {\boldsymbol x}_j\}}$. As in \eqref{CovXY}, we have \begin{eqnarray}
\|{\boldsymbol D}_i - {\boldsymbol C}_i\| &=& \|\operatorname{Cov}(X) - \operatorname{Cov}(Y)\| \notag \\
&\le& \operatorname{\mathbb{E}}\big[\|X - Y\|^2\big]^{1/2} \cdot \left(\operatorname{\mathbb{E}}\big[\|X - {\boldsymbol x}_i\|^2\big]^{1/2} + \operatorname{\mathbb{E}}\big[\|Y - {\boldsymbol x}_i\|^2\big]^{1/2}\right) \notag \\ &\le& \tau \cdot (r + r + \tau) = r^2 \big(2 \tau/r + (\tau/r)^2\big), \label{cov-noise} \end{eqnarray} which is small compared to $r^2$, which is the operating scale for covariance matrices in our setting.
Using these facts, the arguments are virtually the same, except for some additional terms due to triangle inequalities, for example, $\|{\boldsymbol s}_i - {\boldsymbol s}_j\| - 2 \tau \le \|{\boldsymbol x}_i - {\boldsymbol x}_j\| \le \|{\boldsymbol s}_i - {\boldsymbol s}_j\| + 2 \tau$. In particular, this results in $\zeta$ in \eqref{zeta} being now redefined as $\zeta = \frac{3\tau}r + t + \Clref{U-approx} \kappa r$. We omit further technical details.
\end{document}
Youden's J statistic
Youden's J statistic (also called Youden's index) is a single statistic that captures the performance of a dichotomous diagnostic test. (Bookmaker) Informedness is its generalization to the multiclass case and estimates the probability of an informed decision.
Definition
Youden's J statistic is
$J={\text{sensitivity}}+{\text{specificity}}-1={\text{recall}}_{1}+{\text{recall}}_{0}-1$
with the two right-hand quantities being sensitivity and specificity. Thus the expanded formula is:
$J={\frac {\text{true positives}}{{\text{true positives}}+{\text{false negatives}}}}+{\frac {\text{true negatives}}{{\text{true negatives}}+{\text{false positives}}}}-1$
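For concreteness, here is a minimal Python sketch (the function name and the example counts are hypothetical) that computes the statistic directly from confusion-matrix counts:

```python
def youden_j(tp, fn, tn, fp):
    """Youden's J from confusion-matrix counts: sensitivity + specificity - 1."""
    sensitivity = tp / (tp + fn)   # recall of the positive (diseased) group
    specificity = tn / (tn + fp)   # recall of the negative (healthy) group
    return sensitivity + specificity - 1

# Example: 80 true positives, 20 false negatives, 90 true negatives, 10 false positives
print(youden_j(80, 20, 90, 10))  # 0.8 + 0.9 - 1 = 0.7
```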
The index was suggested by W. J. Youden in 1950[1] as a way of summarising the performance of a diagnostic test; however, the formula was published earlier in Science by C. S. Peirce in 1884.[2] Its value ranges from -1 through 1 (inclusive),[1] and is zero when a diagnostic test gives the same proportion of positive results for groups with and without the disease, i.e. the test is useless. A value of 1 indicates that there are no false positives or false negatives, i.e. the test is perfect. The index gives equal weight to false positive and false negative values, so all tests with the same value of the index give the same proportion of total misclassified results. While it is possible to obtain a value of less than zero from this equation, e.g. when the classification yields only false positives and false negatives, such a value simply indicates that the positive and negative labels have been switched; after correcting the labels, the result will fall in the 0 through 1 range.
Youden's index is often used in conjunction with receiver operating characteristic (ROC) analysis.[3] The index is defined for all points of an ROC curve, and the maximum value of the index may be used as a criterion for selecting the optimum cut-off point when a diagnostic test gives a numeric rather than a dichotomous result. The index is represented graphically as the height above the chance line, and it is also equivalent to the area under the curve subtended by a single operating point.[4]
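As an illustration of this cut-off selection, the following sketch uses scikit-learn (an assumption; it is not mentioned in the article) to pick the threshold that maximizes J along an ROC curve, using the identity J = tpr − fpr at each operating point; the data here are made up:

```python
import numpy as np
from sklearn.metrics import roc_curve

# Illustrative data: y_true holds disease labels, y_score the numeric test results
y_true = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
y_score = np.array([0.1, 0.3, 0.2, 0.6, 0.5, 0.8, 0.7, 0.9, 0.4, 0.65])

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                       # J = sensitivity + specificity - 1 = tpr - fpr
best = np.argmax(j)
print("optimal cut-off:", thresholds[best], "with J =", j[best])
```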
Youden's index is also known as deltaP' [5] and generalizes from the dichotomous to the multiclass case as informedness.[4]
The use of a single index is "not generally to be recommended",[6] but informedness or Youden's index is the probability of an informed decision (as opposed to a random guess) and takes into account all predictions.[4]
An unrelated but commonly used combination of basic statistics from information retrieval is the F-score, a (possibly weighted) harmonic mean of recall and precision where recall = sensitivity = true positive rate, but specificity and precision are totally different measures. F-score, like recall and precision, only considers the so-called positive predictions: recall is the probability of correctly predicting the positive class, precision is the probability that a positive prediction is correct, and F-score equates these probabilities under the effective assumption that the positive labels and the positive predictions should have the same distribution and prevalence,[4] similar to the assumption underlying Fleiss' kappa. Youden's J, informedness, recall, precision and F-score are intrinsically unidirectional, aiming to assess the deductive effectiveness of predictions in the direction proposed by a rule, theory or classifier. Markedness (deltaP) is Youden's J used to assess the reverse or abductive direction,[4][7] and matches well human learning of associations, rules and superstitions as we model possible causation,[5] while correlation and kappa evaluate bidirectionally.
Matthews correlation coefficient is the geometric mean of the regression coefficient of the problem and its dual, where the component regression coefficients of the Matthews correlation coefficient are Markedness (inverse of Youden's J or deltaP) and informedness (Youden's J or deltaP'). Kappa statistics such as Fleiss' kappa and Cohen's kappa are methods for calculating inter-rater reliability based on different assumptions about the marginal or prior distributions, and are increasingly used as chance corrected alternatives to accuracy in other contexts. Fleiss' kappa, like F-score, assumes that both variables are drawn from the same distribution and thus have the same expected prevalence, while Cohen's kappa assumes that the variables are drawn from distinct distributions and referenced to a model of expectation that assumes prevalences are independent.[7]
When the true prevalences for the two positive variables are equal, as assumed in Fleiss' kappa and F-score (that is, when the number of positive predictions matches the number of positive cases in the dichotomous, two-class case), the different kappa and correlation measures collapse to identity with Youden's J, and recall, precision and F-score are similarly identical with accuracy.[4][7]
References
1. Youden, W.J. (1950). "Index for rating diagnostic tests". Cancer. 3: 32–35. doi:10.1002/1097-0142(1950)3:1<32::aid-cncr2820030106>3.0.co;2-3. PMID 15405679.
2. Peirce, C.S. (1884). "The numerical measure of the success of predictions". Science. 4 (93): 453–454. doi:10.1126/science.ns-4.93.453.b.
3. Schisterman, E.F.; Perkins, N.J.; Liu, A.; Bondell, H. (2005). "Optimal cut-point and its corresponding Youden Index to discriminate individuals using pooled blood samples". Epidemiology. 16 (1): 73–81. doi:10.1097/01.ede.0000147512.81966.ba. PMID 15613948.
4. Powers, David M W (2011). "Evaluation: From Precision, Recall and F-Score to ROC, Informedness, Markedness & Correlation". Journal of Machine Learning Technologies. 2 (1): 37–63. hdl:2328/27165.
5. Perruchet, P.; Peereman, R. (2004). "The exploitation of distributional information in syllable processing". J. Neurolinguistics. 17 (2–3): 97–119. doi:10.1016/s0911-6044(03)00059-9.
6. Everitt B.S. (2002) The Cambridge Dictionary of Statistics. CUP ISBN 0-521-81099-X
7. Powers, David M W (2012). The Problem with Kappa. Conference of the European Chapter of the Association for Computational Linguistics. pp. 345–355. hdl:2328/27160.
Integrative analysis of somatic mutations and transcriptomic data to functionally stratify breast cancer patients
Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2015: genomics
Jie Zhang1, Zachary Abrams1, Jeffrey D. Parvin1 & Kun Huang1
BMC Genomics volume 17, Article number: 513 (2016)
Somatic mutations can be used as potential biomarkers for subtyping and predicting outcomes for cancer patients. However, cancer patients often carry many somatic mutations, which do not always concentrate on specific genomic loci, suggesting that the mutations may affect common pathways or gene interaction networks instead of common genes. The challenge is thus to identify the functional relationships among the mutations using multi-modal data. We developed a novel approach for integrating patient somatic mutation, transcriptome and clinical data to mine underlying functional gene groups that can be used to stratify cancer patients into groups with different clinical outcomes. Specifically, we use the distance correlation metric to mine the correlations between expression profiles of mutated genes from different patients.
With this approach, we were able to cluster patients based on the functional relationships between the affected genes using their expression profiles, and to visualize the results using multi-dimensional scaling. Interestingly, we identified a stable subgroup of breast cancer patients that is highly enriched with ER-negative and triple-negative subtypes, and the somatic mutation genes they harbor were capable of acting as potential biomarkers to predict patient survival in several different breast cancer datasets, especially in ER-negative cohorts, which have lacked reliable biomarkers.
Our method provides a novel and promising approach for integrating genotyping and gene expression data in patient stratification in complex diseases.
The initiation, development, and metastasis of cancers are complicated processes involving multi-cell, multi-tissue interactions and communications. Most cancers exhibit heterogeneity among patients that leads to different clinical outcomes such as survival time and response to treatment. With the recent rapid advancement of next generation sequencing (NGS) technologies and of the computing capacity for processing and storing large data, more and more human cancer genomes have been characterized in a systematic way, bringing great opportunities for researchers to carry out integrative analyses to identify potential molecular markers for stratifying patients into subtypes with different predicted clinical outcomes [1]. Currently, The Cancer Genome Atlas (TCGA) project harbors comprehensive data ranging from genomic sequences, genetic variants, transcriptomic and proteomic data to clinical data for multiple types of human cancer tissues as well as normal tissues. It is a rich source for scientists to integrate data from different levels and mine the interactions buried among them, which will shed light on cancer subtyping and prognosis as well as cancer initiation and development [2–4].
In the TCGA database, we often observe patients with many somatic mutations that can significantly alter the protein structures or functions of the genes they reside in (we refer to each affected gene as a significantly mutated gene, or SMG). SMGs result from splice-site-change, nonsense, non-stop or frame-shift mutations. The prevalence of SMGs in almost all cancer types leads us to postulate that they may be used as signatures for subtyping and outcome prediction, or as a starting point to elucidate the tumorigenesis process. However, there is a big challenge in using SMGs for cancer patient stratification: the overlaps between the SMGs from different patients are usually small, and the lists usually do not converge on common pathways [1, 5]. For instance, the breast cancer (BRCA) project in TCGA has identified three commonly mutated genes, TP53, GATA3, and PIK3CA, but every patient has a much larger number of somatic mutations which cannot be easily summarized and compared even at the pathway level [1]. Therefore, it is of great interest to identify the potential relationships between the mutated genes from different patients.
In this paper, instead of directly working on the gene lists, we propose to examine the functional relationships of the SMGs between different patients based on functional genomics data. One such functional measurement is the gene expression profile obtained from microarray or RNA-seq experiments, which has already been curated in TCGA. Specifically, given two sets of SMGs from two patients, we develop a method to establish the relationship between them based on the expression profiles of the two gene lists.
Given a list of genes with their expression profiles measured in a cohort of patients, one way to characterize their roles is to examine how these genes lead to separation of the patients. In other words, we can establish a "patient network" using the difference of the expression levels of the genes as a distance metric. Then, given two gene lists, we can compare the similarity between the patient networks established by each of the lists. The similarity will provide pivotal information on the similarity between the roles of these two gene lists among the patients.
Mathematically, such similarity between patient networks can be computed using a recently developed metric called distance correlation [6]. Therefore, in this paper, we develop a workflow for establishing the functional similarity among SMGs from different patients based on distance correlation. Our goal is to reveal previously unknown links between different SMGs, which indicate their functional relationships in the context of the human gene interaction network, and to use these relationships to stratify patients into different subtypes. While we demonstrate our approach using a breast cancer study, our method provides a novel and promising approach for integrating genotype and gene expression data in patient stratification in complex diseases.
In this paper, we obtained whole exome sequencing (WES) data from TCGA for the patients with breast cancer and derived the SMG list for each patient. The list of SMGs from each patient was used as the feature set for that patient. We then computed the distance correlation for every pair of SMG lists to obtain the functional relationships between the affected genes in different patients based on the gene expression profiles. The process yielded the distance correlation matrix across the patients. We then visualized the patients by multi-dimensional scaling and further clustered them into different groups. Our workflow is summarized in Fig. 1.
Workflow of identifying functional gene relationships using variants and transcriptomic data
The key component in this workflow is to compute the distance correlation between a pair of gene lists (in this case, the expression profiles of two SMG lists from two patients). The intuition behind distance correlation is as follows: a gene list can be used to cluster the patient cohort of a heterogeneous disease, generating a clustering result. Two different gene lists will generate two results, and the results may be similar if the two gene lists play similar functional roles in the disease phenotype. The distance correlation measures the similarity of the two results.
In our case, we used the gene expression data (RNA-seq) of the entire cohort to compute the distance correlation, although in theory any gene expression dataset from a cohort with similar disease diversity can be used; from a more general point of view, any type of data that captures a sufficiently deep functional relationship among genes, even data from healthy individuals, could be used.
After obtaining the distance correlation matrix between all pairs of SMG lists in the context of gene expression, which represents the functional relationship between any two sets of SMGs in breast cancer, we used this matrix to cluster the entire breast cancer cohort; the result should reveal groups of patients that share a common underlying perturbation arising from seemingly different SMG lists.
The Cancer Genome Atlas (TCGA, http://www.cancergenome.nih.gov) level-3 somatic mutation data (derived from WES) and RNA-seq data for breast cancer patients were downloaded from the TCGA data portal in July 2013. Among all 876 patients available at the time of download, 445 had matching SMG and RNA-seq data, and the data from these patients were chosen for further analysis. Level-3 RNA-seq data from 83 normal breast samples were also obtained from TCGA.
SMG selection
Somatic mutations derived from WES of the TCGA breast cancer patients were screened for significantly mutated genes (SMGs). An SMG was defined as a gene carrying a frame-shift indel, splice-site change, non-stop mutation, or nonsense mutation. Mismatch, silent, RNA and in-frame indel mutations were not included in the SMGs. For a specific group of patients, the number of SMGs refers to the size of the union of the SMGs in that group of patients. The SMGs corresponding to all patients analyzed in this study are listed in Additional file 1: Table S2.
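A rough sketch of this filtering step in Python/pandas is shown below; the column names and variant-class strings follow common TCGA MAF conventions but are assumptions here, and the exact labels may differ between MAF versions:

```python
import pandas as pd

# Variant classes counted as SMG-defining (assumed labels; adjust for the MAF version used)
SMG_CLASSES = {"Frame_Shift_Del", "Frame_Shift_Ins", "Splice_Site",
               "Nonstop_Mutation", "Nonsense_Mutation"}

def smg_lists(maf_path):
    """Map each tumor sample barcode to its set of significantly mutated genes."""
    maf = pd.read_csv(maf_path, sep="\t", comment="#", low_memory=False)
    hits = maf[maf["Variant_Classification"].isin(SMG_CLASSES)]
    return hits.groupby("Tumor_Sample_Barcode")["Hugo_Symbol"].apply(set).to_dict()
```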
Computing distance correlation
Distance correlation is a recently developed metric with two advantages [6]. First, it can be used to calculate the "correlation" between two matrices instead of just two vectors. Essentially, it calculates the similarity of the effects of two "feature sets" on separating the same set of samples. Second, unlike Pearson correlation, which is based on a linear model, it can capture nonlinear relationships. These properties make it a good candidate for our purpose of comparing the relationships between two gene lists.
In this project, the distance correlation was computed using Matlab as described in [6]. Given two lists of SMGs \( {\mathit{\mathsf{g}}}_a \) and \( {\mathit{\mathsf{g}}}_b \) with \( n_a \) and \( n_b \) genes respectively, we first extract their gene expression matrices across N patients as
$$ {E}^a=\left[\begin{array}{ccc}\hfill {e}_1^a\hfill & \hfill \cdots \hfill & \hfill {e}_N^a\hfill \end{array}\right]\in {\mathrm{\Re}}^{n_a\times N}\mathrm{and}\kern0.5em {E}^b=\left[{e}_1^b\kern1em \cdots \kern1em {e}_N^b\right]\in {\mathrm{\Re}}^{n_b\times N}, $$
where \( {e}_j^i \) (i ∈ {a, b}, j ∈ {1, 2, …, N}) are \( n_i \)-dimensional column vectors representing the expression profiles for the j-th patient over the i-th SMG list. The distance matrices among the patients for the two sets of SMGs can be calculated as
$$ {D}^a=\left[{d}_{jk}^a\right]\in {\mathrm{\Re}}^{N\times N}\mathrm{and}\kern0.5em {D}^b=\left[{d}_{jk}^b\right]\in {\mathrm{\Re}}^{N\times N} $$
with \( {d}_{jk}^i=\left\Vert {e}_j^i-{e}_k^i\right\Vert \), i ∈ {a, b}, j, k = 1, 2, …, N. Let \( \overline{d_{j,\cdot}^i} \) and \( \overline{d_{\cdot, k}^i} \) be the averages of the j-th row and the k-th column of the matrix \( {D}^i \) (i ∈ {a, b}), respectively. Also let \( \overline{d_{\cdot, \cdot}^i} \) be the grand average of all entries of \( {D}^i \) (i ∈ {a, b}). Then set the centralized distance matrices to be
$$ \overline{D^i}=\left[\overline{d_{jk}^i}\right]=\left[{d}_{jk}^i-\overline{d_{j,\cdot}^i}-\overline{d_{\cdot, k}^i}+\overline{d_{\cdot, \cdot}^i}\right]\in {\mathrm{\Re}}^{N\times N}\mathrm{with}\ i\ \in \left\{a,b\right\}. $$
Then the distance covariance between two distance matrices can be computed as
$$ dCov\left({E}^a,\ {E}^b\right)=\frac{1}{N^2}{\displaystyle {\sum}_{j,k=1}^N}\overline{d_{jk}^a}\cdot \overline{d_{jk}^b,} $$
and the distance correlation is defined as
$$ dCor\left({E}^a,\ {E}^b\right)=\frac{dCov\left({E}^a,{E}^b\right)}{\sqrt{dCov\left({E}^a,{E}^a\right)\cdot dCov\left({E}^b,\ {E}^b\right)}}. $$
For the 445 SMG lists obtained from the 445 patients, we compute the distance correlation matrix
$$ {D}_{dCor}=\left[ dCor\left({E}^i,{E}^j\right)\right]\in {\mathrm{\Re}}^{445\times 445},i,j=1,2,\dots, 445. $$
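The original computation was done in Matlab; the following Python/numpy sketch mirrors the displayed formulas (pairwise patient distances, double centering, then the normalized inner product) and is provided only as an illustration:

```python
import numpy as np

def distance_correlation(Ea, Eb):
    """Distance correlation between two expression matrices.

    Ea, Eb: arrays of shape (n_a, N) and (n_b, N) holding the expression of the
    two SMG lists across the same N patients, following the formulas above.
    """
    def centered_distances(E):
        # N x N Euclidean distance matrix between patient columns
        D = np.linalg.norm(E[:, :, None] - E[:, None, :], axis=0)
        # double-center: subtract row and column means, add back the grand mean
        return D - D.mean(axis=0, keepdims=True) - D.mean(axis=1, keepdims=True) + D.mean()

    A, B = centered_distances(Ea), centered_distances(Eb)
    dcov_ab = (A * B).mean()          # (1/N^2) * sum of elementwise products
    denom = np.sqrt((A * A).mean() * (B * B).mean())
    return dcov_ab / denom
```

Applying this function to the expression submatrices of every pair of patients' SMG lists would yield the matrix D_dCor defined above.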
Multidimensional scaling and clustering
In order to visualize the distribution of the patients with the proximity measurements defined by the distance correlation matrix, we applied multidimensional scaling (MDS) to embed the data points (each point representing a patient) in 3D space. Specifically, we used the Matlab function cmdscale() with its default settings. The distance correlation matrix was first transformed to a dissimilarity matrix (using \( 1-{D}_{dCor} \)) before MDS. K-means clustering was then performed on the patients using the same dissimilarity matrix. It was carried out using the Matlab kmeans function with the default squared-Euclidean distance, 50 replicates, and K = 3 or 5.
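A rough Python equivalent of this step might look as follows; note that scikit-learn's MDS uses the SMACOF algorithm rather than classical scaling (Matlab's cmdscale), so the embedding will differ slightly, and the variable D_dcor is assumed to hold the matrix computed above:

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

# D_dcor: the 445 x 445 distance correlation matrix from the previous step (assumed available)
dissimilarity = 1.0 - D_dcor

# Embed the patients in 3D for visualization (SMACOF-based metric MDS)
coords = MDS(n_components=3, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)

# K-means with 50 restarts and K = 3, treating each patient's row of
# dissimilarities as its feature vector
labels = KMeans(n_clusters=3, n_init=50, random_state=0).fit_predict(dissimilarity)
```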
Jaccard index computing
The SMGs for every pair of patients in the TCGA BRCA cohort were used to calculate the similarity between the two SMG lists using the Jaccard index (J), which is defined as:
$$ \boldsymbol{J}=\frac{\left|\boldsymbol{A}\ {\displaystyle \cap}\boldsymbol{B}\right|}{\left|\boldsymbol{A}\ {\displaystyle \cup}\boldsymbol{B}\right|}, $$
where A and B are the two groups of SMGs from any pair of patients in the TCGA BRCA cohort. A∩B is the set of overlapping genes within the two SMG groups A and B, and A∪B is the union of these two groups.
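A minimal sketch of this computation (with hypothetical gene symbols) is:

```python
def jaccard_index(smgs_a, smgs_b):
    """Jaccard index between two patients' SMG sets."""
    a, b = set(smgs_a), set(smgs_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical gene symbols: intersection {TP53}, union of size 4, so J = 0.25
print(jaccard_index({"TP53", "GATA3", "PIK3CA"}, {"TP53", "MAP3K1"}))
```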
For validation, the NCBI GEO breast cancer dataset GSE1456 (containing 318 patients of mixed subtypes) [7] and the Netherlands Cancer Institute (NKI) NKI-295 dataset (containing 295 patients of mixed subtypes) [8] were used. These microarray datasets (and their specific subtypes) contain gene expression data and matching survival times (in years) needed for survival analysis. A log-rank test was performed to determine the significance of the difference in survival time between two patient groups, and Kaplan-Meier curves were plotted.
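As a sketch of this validation step, the following uses the lifelines package (an assumption; the article does not state which software was used for the survival analysis) to run a log-rank test and plot Kaplan-Meier curves for two patient groups:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_survival(years, events, groups):
    """Log-rank test and Kaplan-Meier curves for two patient groups.

    years: survival times in years; events: 1 if death observed, 0 if censored;
    groups: 0/1 labels from the clustering (all arrays assumed available).
    """
    in_a = groups == 0
    result = logrank_test(years[in_a], years[~in_a],
                          event_observed_A=events[in_a],
                          event_observed_B=events[~in_a])
    print("log-rank p-value:", result.p_value)

    kmf = KaplanMeierFitter()
    for mask, label in [(in_a, "group 0"), (~in_a, "group 1")]:
        kmf.fit(years[mask], event_observed=events[mask], label=label)
        kmf.plot_survival_function()
```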
Pathway analysis and gene query in TCGA database
Ingenuity Pathway Analysis (IPA) was used to analyze the enriched biological functions and pathways in the identified SMGs. The prevalence of the SMGs in other cancer types in the TCGA database was assessed using the cBioPortal online tools (http://www.cbioportal.org) [9].
We applied the above described workflow to analyze 445 breast cancer patients with matching SMG and RNA-seq data from TCGA. The distance correlation matrix was calculated and transformed into a dissimilarity matrix. After MDS, the patients were embedded into a 3D space for visualization, as shown in Fig. 2, with each point representing a patient.
K-means clustering on the embedded patients, revealing a subtype of breast cancer patients enriched with triple-negative patients. a: K = 3, Red: Group 1, Blue: Group 2, Green: Group 3. b: K = 5, Group 2 from panel A was further clustered into three groups (blue, magenta and red)
When the patients were clustered using the K-means algorithm, we observed a distinctive group of patients, highlighted by the red circle in Fig. 2. The number of clusters was tested by checking the silhouette values and plots for different choices of K. The silhouette value reaches its peak at K = 5 (data not shown), but this group is stable even when the number of clusters is changed (e.g., K = 3 vs. 5). In addition, we inspected the silhouette plots and found that the clusters are more separated when K = 3. Thus we used K = 3 for most of the remaining analysis.
In order to test whether the clustering of patients could be achieved using other methods or could be an artifact, we carried out three tests. First, we directly used the SMGs as features for the patients, and the similarity among the patients was established by calculating the Jaccard indices between every pair of patients. Out of all 98,790 patient pairs, 96.2 % of the indices are zero, meaning those pairs do not share any common genes. Thus, using SMGs directly cannot effectively separate the patients. Second, we tested whether using non-cancer gene expression data would lead to the same observation. As shown in Fig. 3a, there is no clear separation among the patients, and the clusters obtained from the K-means algorithm are not enriched for specific subtypes of breast cancer when we used the RNA-seq data of the 83 normal breast tissue samples instead of the breast cancer data. Finally, we tested randomly selected "pseudo-SMGs" for the patients: for each patient, we randomly selected the same number of genes as her SMGs and applied the same workflow; the result is shown in Fig. 3b. Results similar to those in Fig. 3a are observed.
The results of the distance correlation workflow on control data. a: Applying the workflow using normal breast gene expression data. The three groups from K-means clustering are not enriched for specific subtypes of breast cancer. b: Applying the workflow on randomly selected "pseudo-SMGs". No subtype-enriched patient cluster can be observed
In order to gain insight into this distinctive group of patients, we examined the status of the known molecular markers for breast cancer, namely estrogen receptor (ER), progesterone receptor (PR), and HER2. Statistical analysis revealed that this group is significantly enriched with ER-negative, PR-negative, HER2-negative, or triple-negative breast cancer (ER-, PR-, HER2- or TNBC) patients. Specifically, while it contains 41 patients comprising only 9.2 % of the total cohort, it includes 34 % of all TNBC patients (Table 1). To examine whether this group can be differentiated easily from the cohort using other genes, we repeated the process using randomly selected "pseudo-SMGs" of the same sizes for every patient. The clustering result was not able to separate the patients into groups with such enrichment of ER- or TNBC patients. Since both ER- and TNBC patients are known to have worse prognosis than ER+ patients, our further analysis focused on this specific group, which we refer to as "Group 1" in the rest of the paper.
Table 1 Statistical tests on the patient subtypes enriched in each group from the K = 3 clustering results. No statistical test was performed for HER2 (and TN) status because more than 25 % of patients lack HER2 status
Group 1 contains 201 SMGs that are specifically present in Group 1 patients (Fig. 4, Additional file 1: Table S1). Enrichment and pathway analysis using IPA showed that these SMGs are highly enriched with cancer-related genes and with genes for embryonic development, cell morphology and organ development, indicating that this group of genes is more involved in the early stages of cancer cell development and the differentiation process (Fig. 5). Several upstream-regulator drugs were found to regulate multiple genes in this group. Among them, ethinyl estradiol, an orally bioactive estrogen, regulates ABCB11, CCR7, CD97, CYP2D6, CYP7B1 and SGK1, suggesting that, although this group is ER-negative, estrogen may still play a role in these patients; another drug, which is used to treat myelodysplastic syndromes and acute myeloid leukemia, regulates BMP4, CCR7, MAGEC1, METAP2, MGMT, RARB, RARRES1, SGK1, SNRPN, and TGFBR2 [10, 11]. This may be a direction for future therapeutic research on this specific subtype of triple-negative breast cancer. Interestingly, the narcotic substance amphetamine regulates BMP4, DCC, SGK1, and TGFBR2.
Venn diagram showing the genes shared/unique among the three groups from K = 3 clustering results
Pathway analysis showing the top 10 biological functions enriched in the genes specifically to Group 1 isolated from K = 3 clustering
In addition, analysis using cBioPortal shows that the group of 201 SMGs is frequently altered (mutated or containing copy number variants) in almost all types of cancer available in the TCGA database (Fig. 6).
Group 1 specific genes are altered in multiple cancer types (TCGA data). AML: acute myeloid leukemia; ACC: adenoid cystic carcinoma; BC: bladder cancer; BUC: bladder urothelial carcinoma; BLGG: brain lower grade glioma; BIC: breast invasive carcinoma; CSCC &EAC: cervical squamous cell carcinoma & endocervical adenocarcinoma; GBM: glioblastoma multiforme; HNSCC: head & neck squamous cell carcinoma; KRCCC: kidney renal clear cell carcinoma; KRPCC: kidney renal papillary cell carcinoma; LAC: lung adenocarcinoma; LSCC: lung squamous cell carcinoma; OSCC: ovarian serous cystadenocarcinoma; Prostate AC: prostate adenocarcinoma; SCM: skin cutaneous melanoma; SAC: stomach adenocarcinoma; TC: thyroid carcinoma; UCEC: uterine corpus endometrial carcinoma; LHC: liver hepatic carcinoma; Pancreatic AC: pancreatic adenocarcinoma
We further tested whether this unique group of 201 SMGs (Additional file 1: Table S1) or its subsets is associated with patient outcome (specifically, survival time in this paper) using multiple publicly available breast cancer gene expression datasets. The results are shown in Fig. 7. The subsets were selected based on the IPA pathway annotation. Our test on the NKI data suggested that the 201 SMGs are able to separate patients (based on the K-means algorithm with K = 2) into two groups with a significant survival time difference, but cannot effectively separate the ER-negative patients. The 201 SMGs can be clustered into several functional/pathway groups based upon gene enrichment analysis using Ingenuity Pathway Analysis (IPA®). Among these groups, we found that the group of 27 genes with embryonic development functions performed the best; it can separate the ER-negative breast cancer patients into two groups with significantly different survival times (Fig. 7 Middle). In addition, this 27-gene set can also separate patients in the other dataset (GSE1456), as shown in Fig. 7 Right. Given the high enrichment of ER-negative patients in Group 1, these results suggest that the 27 genes may form the core of the Group 1 SMGs. As a comparison, the SMGs unique to Group 2 were not able to separate the ER-negative patients into groups with significantly different survival outcomes, and they did not perform as well as the Group 1 SMGs in the general-population survival test (data not shown).
Survival analysis using the Group 1-specific genes and their subset on separate breast cancer microarray data. Left: on the NKI full cohort; Middle: on the NKI ER-negative cohort; Right: on the GSE1456 full cohort
With the recent rapid development of next-generation sequencing technology and computing capacity, huge amounts of data in different modalities for cancer specimens have accumulated at an amazing speed in public databases. Integrating and mining these data has therefore become a major challenge in the bioinformatics field. In this work, we developed a novel approach to integrate genomic, transcriptomic and clinical data of cancer patients, specifically to compare somatic mutations of patients based on their functional relationships in the context of gene expression profiles, thus tackling the challenge of the low overlap of mutated genes among cancer patients. By introducing the distance correlation metric to directly measure the relationship between two sets of genes affected by somatic mutations, we can not only cluster the patients into groups with different clinical subtypes, but also visualize the clusters and identify group-specific mutations. The power of distance correlation frees us from comparing only gene pairs and allows direct list-to-list comparison. Distance correlation captures not only the linear relationships between the two lists, as Pearson correlation does, but also reveals non-linear relationships, covering biological interactions to a far greater and deeper extent.
Applying this approach to TCGA breast cancer patients reveals a group of patients who are mostly negative for one or more of the three breast cancer biomarkers (ER, PR, HER2) [12], and one third of the group is of the triple-negative subtype. Triple-negative breast cancer (TNBC) comprises 12–20 % of breast cancer patients [13]. It progresses more aggressively and does not respond well to hormone therapy. The rapid and aggressive disease course makes the prognosis of TNBC very poor [14] and its prediction difficult. The group of patients we identified here harbors SMGs that are tightly interlinked with each other and enriched for early-stage cancer development functions. Among them, the 27 embryonic development genes form tight interaction networks, as shown in Fig. 8, and those genes can be used for breast cancer survival prognosis, especially for the poorly understood ER-negative cohort. The TCGA database has not been curated long enough for this subtype of patients; therefore, we did not test our findings on TCGA data. Instead, we chose two older breast cancer microarray datasets. Unfortunately, the datasets we tested do not contain enough TNBC patients, so we only tested on the ER-negative cohort. The clustering results indicated that a portion of the triple-negative patients may be fundamentally different from the rest of the breast cancer patients due to the somatic mutations they harbor. Many of their genes share common upstream regulators, such as the drug for acute myeloid leukemia or estrogen, suggesting this group of patients may benefit from other types of treatments that have not been administered to TNBC patients. We suggest that the common upstream regulators and drugs interacting with these genes can provide insight into the development and treatment of TNBC. In addition, while some of the 27 genes are known to be associated with other cancers, such as AFF1 [15], BMP4 [16], and TRIM24 [17], others, such as MED27, are not widely known to be associated with cancer. Thus our work also generated new hypotheses on cancer-related genes.
Group 1 genes enriched with embryonic development, organ development and morphology function (IPA)
In summary, a common challenge in studying complex diseases such as cancers is the lack of common genetic mutations among the patients. Besides pursuing commonly affected pathways, we provide a complementary approach for integrating the genotype data with transcriptome data to study the relationships between the genetic mutations at the functional level. While our main goal is to explore the functional relationships of mutated gene groups, the identified genes may also serve as potential biomarkers for different subtypes of cancer. Currently, due to the limitations of the data, we focus on the protein-coding genes from the WES experiments.
In the near future, we plan to apply the same workflow to other cancer datasets in TCGA to further test the effectiveness of this method, as well as to identify diseases in which such functional relationships can lead to meaningful stratification of patients. With the cost of whole-genome sequencing decreasing dramatically, more somatic mutations in non-coding and regulatory regions are expected to become available, and the approach will need to be expanded to accommodate such mutations.
BRCA, breast cancer; ER, estrogen receptor; GEO, Gene Expression Omnibus; HER2, human epidermal growth factor receptor 2; IPA, Ingenuity Pathway Analysis; MDS, multi-dimensional scaling; NCBI, National Center for Biotechnology Information; NGS, next-generation sequencing; PR, progesterone receptor; RNA-seq, ribonucleic acid sequencing; SMG, significant mutant gene; TCGA, The Cancer Genome Atlas; TNBC, triple-negative breast cancer; WES, whole exome sequencing
Cancer Genome Atlas Network. Comprehensive molecular portraits of human breast tumours. Nature. 2012;490(7418):61–70.
Ritchie MD, Holzinger ER, Li R, Pendergrass SA, Kim D. Methods of integrating data to uncover genotype–phenotype interactions. Nat Rev Genet. 2015;16(2):85–97.
Kristensen VN, Lingjærde OC, Russnes HG, Vollan HKM, Frigessi A, Børresen-Dale A-L. Principles and methods of integrative genomic analyses in cancer. Nat Rev Cancer. 2014;14(5):299–313.
Wang C, Machiraju R, Huang K. Breast cancer patient stratification using a molecular regularized consensus clustering method. Methods. 2014;67(3):304–12.
Jia P, Zheng S, Long J, Zheng W, Zhao Z. dmGWAS: dense module searching for genome-wide association studies in protein-protein interaction networks. Bioinformatics. 2011;27(1):95–102.
Székely GJ, Rizzo ML, Bakirov NK. Measuring and testing dependence by correlation of distances. Ann Stat. 2007;35(6):2769–94.
Pawitan Y, Bjöhle J, Amler L, Borg A-L, Egyhazi S, Hall P, Han X, Holmberg L, Huang F, Klaar S, Liu ET, Miller L, Nordgren H, Ploner A, Sandelin K, Shaw PM, Smeds J, Skoog L, Wedrén S, Bergh J. Gene expression profiling spares early breast cancer patients from adjuvant therapy: derived and validated in two population-based cohorts. Breast Cancer Res. 2005;7(6):R953–64.
van 't Veer LJ, Dai H, van de Vijver MJ, He YD, Hart AAM, Mao M, Peterse HL, van der Kooy K, Marton MJ, Witteveen AT, Schreiber GJ, Kerkhoven RM, Roberts C, Linsley PS, Bernards R, Friend SH. Gene expression profiling predicts clinical outcome of breast cancer. Nature. 2002;415(6871):530–6.
Gao J, Aksoy BA, Dogrusoz U, Dresdner G, Gross B, Sumer SO, Sun Y, Jacobsen A, Sinha R, Larsson E, Cerami E, Sander C, Schultz N. Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal. Sci Signal. 2013;6(269):pl1.
Putnik M, Zhao C, Gustafsson J-Å, Dahlman-Wright K. Global identification of genes regulated by estrogen signaling and demethylation in MCF-7 breast cancer cells. Biochem Biophys Res Commun. 2012;426(1):26–32.
Banerjee S, Bacanamwo M. DNA methyltransferase inhibition induces mouse embryonic stem cell differentiation into endothelial cells. Exp Cell Res. 2010;316(2):172–80.
Brenton JD, Carey LA, Ahmed AA, Caldas C. Molecular classification and molecular forecasting of breast cancer: ready for clinical application? J Clin Oncol. 2005;23(29):7350–60.
Anders CK, Carey LA. Biology, metastatic patterns, and treatment of patients with triple-negative breast cancer. Clin Breast Cancer. 2009;Suppl 2:S73–81.
Wahba HA, El-Hadaad HA. Current approaches in treatment of triple-negative breast cancer. Cancer Biol Med. 2015;12(2):106–16.
Srinivasan RS, Nesbit JB, Marrero L, Erfurth F, LaRussa VF, Hemenway CS. The synthetic peptide PFWT disrupts AF4-AF9 protein complexes and induces apoptosis in t(4;11) leukemia cells. Leukemia. 2004;18(8):1364–72.
Montesano R, Sarközi R, Schramek H. Bone morphogenetic protein-4 strongly potentiates growth factor-induced proliferation of mammary epithelial cells. Biochem Biophys Res Commun. 2008;374(1):164–8.
Ignat M, Teletin M, Tisserand J, Khetchoumian K, Dennefeld C, Chambon P, Losson R, Mark M. Arterial calcifications and increased expression of vitamin D receptor targets in mice lacking TIF1alpha. Proc Natl Acad Sci U S A. 2008;105(7):2598–603.
We thank Ohio Supercomputer Center for computing support. This work was partially funded by NCI U01 CA188547 grant. ZA was funded by NLM fellowship.
The publication costs for this article were funded by the corresponding author.
This article has been published as part of BMC Genomics Volume 17 Supplement 7, 2016: Selected articles from the International Conference on Intelligent Biology and Medicine (ICIBM) 2015: genomics. The full contents of the supplement are available online at http://bmcgenomics.biomedcentral.com/articles/supplements/volume-17-supplement-7.
All datasets used in this study were publicly available from the website described in the Methods section.
KH conceived of the study. JZ, ZA and JP collected the data. JZ performed the computational coding and conducted the data analysis. JZ and KH drafted the manuscript; JP participated in the design of the method. All authors read and approved the final manuscript.
Department of Biomedical Informatics, The Ohio State University, Columbus, OH, 43210, USA
Jie Zhang, Zachary Abrams, Jeffrey D. Parvin & Kun Huang
Correspondence to Kun Huang.
Additional file
Supplementary tables. This file contains two tables: the first table contains the SMGs in group 1 patients and their mutation frequencies among group 1 patients; the second table contains the patient IDs and their corresponding SMGs from TCGA BRCA. (DOCX 117 kb)
Zhang, J., Abrams, Z., Parvin, J.D. et al. Integrative analysis of somatic mutations and transcriptomic data to functionally stratify breast cancer patients. BMC Genomics 17 (Suppl 7), 513 (2016). https://doi.org/10.1186/s12864-016-2902-0
Distance correlation
Breast cancer patient stratification
Functional analysis of somatic mutation
Integrative analysis
DOI:10.1038/ncb1065
RGS16 inhibits signalling through the Gα13–Rho axis
@article{Johnson2003RGS16IS,
  title={RGS16 inhibits signalling through the G$\alpha$13–Rho axis},
  author={Eric N. Johnson and Tammy M. Seasholtz and Abdul A. Waheed and Barry Kreutz and Nobuchika Suzuki and Tohru Kozasa and Teresa L. Z. Jones and Joan Heller Brown and Kirk M. Druey},
  journal={Nature Cell Biology},
  year={2003},
  volume={5},
  pages={1095--1103}
}
Gα13 stimulates the guanine nucleotide exchange factors (GEFs) for Rho, such as p115Rho-GEF. Activated Rho induces numerous cellular responses, including actin polymerization, serum response element (SRE)-dependent gene transcription and transformation. p115Rho-GEF contains a Regulator of G protein Signalling domain (RGS box) that confers GTPase activating protein (GAP) activity towards Gα12 and Gα13 (ref. 3). In contrast, classical RGS proteins (such as RGS16 and RGS4) exhibit RGS domain…
Application of RGS box proteins to evaluate G-protein selectivity in receptor-promoted signaling.
M. Hains, D. Siderovski, T. K. Harden
Methods in enzymology
The use of specific RGS domain constructs to discriminate among G(i/o)-, Gq- and G(12/13)-mediated activation of phospholipase C (PLC) isozymes in COS-7 cells is described.
R4 RGS proteins: regulation of G-protein signaling and beyond.
G. Bansal, K. Druey, Z. Xie
Pharmacology & therapeutics
This review highlights recent advances in the understanding of the physiological functions of one subfamily of RGS proteins with a high degree of homology (B/R4) gleaned from recent studies of knockout mice or cells with reduced RGS expression.
Regulation of RhoGEF proteins by G12/13‐coupled receptors
S. Siehler
An overview of G12/13 signalling of GPCRs with a focus on RhoGEF proteins as the immediate mediators of G12/13 activation is given, which provides novel therapeutic approaches for cancer, cardiovascular diseases, arterial and pulmonary hypertension, and bronchial asthma.
Multi-tasking RGS proteins in the heart: the next therapeutic target?
Evan L Riddle, Raúl A Schwartzman, M. Bond, P. Insel
RGS proteins are regulators of cardiovascular physiology and potentially novel drug targets as well because of the importance of GPCR-signaling pathways and the profound influence of RGS proteins on these pathways.
Gα13 regulates MEF2-dependent gene transcription in endothelial cells: role in angiogenesis
G. Liu, Jingyan Han, J. Profirovic, E. Strekalova, T. Voyno-Yasenetskaya
It is shown that myocyte-specific enhancer factor-2 (MEF2) mediates Gα13-dependent angiogenesis and that MEF2 proteins are an important component in Gα13-mediated angiogenesis.
The superfamily of "regulator of g-protein signaling" (rgs) proteins
Melinda D. Willard, F. Willard, D. Siderovski
A large family of seven transmembrane-domain receptors for hormones, neurotransmitters, growth factors, chemoattractants, light, odorants, and other extracellular stimuli promote intracellular signaling responses by activation of heterotrimeric G proteins, which include additional functionalities beyond their hallmark capacity to act as GAPs for Gα subunits.
Regulators of G-protein-signaling proteins: negative modulators of G-protein-coupled receptor signaling.
G. Woodard, I. Jardin, A. Berna-Erro, G. Salido, J. Rosado
International review of cell and molecular biology
Human and animal studies have revealed that RGS proteins play a vital role in physiology and can be ideal targets for diseases such as those related to addiction where receptor signaling seems continuously switched on.
Protein kinase inhibitor β enhances the constitutive activity of G-protein-coupled zinc receptor GPR39.
Z. Kovacs, T. Schacht, Ann-Kathrin Herrmann, P. Albrecht, Konstantinos Lefkimmiatis, A. Methner
The Biochemical journal
Mutation of this domain abolished the inhibitory activity of PKIB on protein kinase A activity, but had no effect on the interaction with GPR39, cell protection and induction of SRE-dependent transcription.
Regulator of G‐Protein Signaling 16 Is a Negative Modulator of Platelet Function and Thrombosis
Keziah R. Hernandez, Z. Karim, H. Qasim, K. Druey, F. Alshbool, F. Khasawneh
Journal of the American Heart Association
A critical role for RGS16 in regulating hemostatic and thrombotic functions of platelets in mice is supported, and RGS16 represents a potential therapeutic target for modulating platelet function.
G12/13-dependent signaling of G-protein-coupled receptors: disease context and impact on drug discovery
Expert opinion on drug discovery
The review gives an overview of the present understanding of the G12/13-related biology of GPCRs; many GPCRs were found to couple to G12/13 proteins in addition to coupling to one or more other types of G proteins.
Leukemia-Associated Rho Guanine Nucleotide Exchange Factor Promotes Gαq-Coupled Activation of RhoA
M. A. Booden, D. Siderovski, C. Der
These findings suggest that the RhoA exchange factor LARG, unlike the related p115 RhoGEF and PDZ-RhoGEF proteins, can serve as an effector for Gq-coupled receptors, mediating their functional linkage to RhoA-dependent signaling pathways.
Functional Characterization of the G Protein Regulator RGS13*
Eric N. Johnson, K. Druey
The characterization of RGS13, the smallest member of the RGS family, which has been cloned from human lung, suggests that RGS13 may regulate Gαi-, Gαq-, and Gαs-coupled signaling cascades.
The GTPase-activating Protein RGS4 Stabilizes the Transition State for Nucleotide Hydrolysis*
D. M. Berman, T. Kozasa, A. Gilman
RGS4 stabilizes the transition state for GTP hydrolysis, as evidenced by its high affinity for the GDP-AlF4−-bound forms of Goα and Giα and its relatively low affinity for the GTPγS- and GDP-bound forms of these proteins.
A Rho Exchange Factor Mediates Thrombin and Gα12-induced Cytoskeletal Responses*
M. Majumdar, T. M. Seasholtz, C. Buckmaster, D. Toksoz, J. Brown
The Gα12 protein family is identified as a transducer of thrombin signaling to the cytoskeleton, and the first evidence is provided that a Rho-GEF transduces signals between G protein-coupled receptors and Rho-mediated cytoskeletal responses.
Palmitoylation Regulates Regulators of G-protein Signaling (RGS) 16 Function
Abel Hiol, P. C. Davey, +6 authors T. L. Jones
Results suggest that the amino-terminal palmitoylation of an RGS protein promotes its lipid raft targeting, which allows palmitoylation of a poorly accessible cysteine residue that is critical for RGS16 and RGS4 GAP activity.
The regulators of G protein signaling (RGS) domains of RGS4, RGS10, and GAIP retain GTPase activating protein activity in vitro.
S. Popov, K. Yu, T. Kozasa, T. Wilkie
It is demonstrated that the RGS domains of RGS4, RGS10, and GAIP retain GTPase accelerating activity with the Gi class substrates Gialpha1, Goalpha, and Gzalpha in vitro.
The role of Rho in G protein-coupled receptor signal transduction.
V. Sah, T. M. Seasholtz, S. Sagi, J. Brown
Annual review of pharmacology and toxicology
Gα12/13 can bind and activate Rho-specific guanine nucleotide exchange factors, providing a mechanism by which GPCRs that couple to Gα12/13 could activate Rho and its downstream responses.
Src-mediated RGS16 Tyrosine Phosphorylation Promotes RGS16 Stability*
A. Derrien, B. Zheng, +4 authors K. Druey
It is shown that endogenous RGS16 is phosphorylated after epidermal growth factor stimulation of MCF-7 cells, and results suggest that Src mediates RGS16 tyrosine phosphorylation, which may promote RGS16 stability.
GTPase-activating proteins for heterotrimeric G proteins: regulators of G protein signaling (RGS) and RGS-like proteins.
E. Ross, T. Wilkie
Annual review of biochemistry
GAP activity can sharpen the termination of a signal upon removal of stimulus, attenuate a signal either as a feedback inhibitor or in response to a second input, promote regulatory association of other proteins, or redirect signaling within a G protein signaling network.
p115 RhoGEF, a GTPase activating protein for Gα12 and Gα13
T. Kozasa, Xuejun Jiang, +5 authors P. Sternweis
Members of the regulators of G protein signaling (RGS) family stimulate the intrinsic guanosine triphosphatase (GTPase) activity of the α subunits of certain heterotrimeric guanine nucleotide–binding…
How to prevent viremia rebound? Evidence from a PRRSv data-supported model of immune response
Natacha Go ORCID: orcid.org/0000-0002-9638-90781,2,3,
Suzanne Touzeau2,4 na1,
Zeenath Islam3,
Catherine Belloc1 &
Andrea Doeschl-Wilson3 na1
BMC Systems Biology volume 13, Article number: 15 (2019)
Understanding what determines the between-host variability in infection dynamics is a key issue to better control the infection spread. In particular, pathogen clearance is desirable over rebounds for the health of the infected individual and its contact group. In this context, the Porcine Respiratory and Reproductive Syndrome virus (PRRSv) is of particular interest. Numerous studies have shown that pigs similarly infected with this highly ubiquitous virus elicit diverse response profiles. Whilst some manage to clear the virus within a few weeks, others experience prolonged infection with a rebound. Despite much speculation, the underlying mechanisms responsible for this undesirable rebound phenomenon remain unclear.
We aimed at identifying immune mechanisms that can reproduce and explain the rebound patterns observed in PRRSv infection using a mathematical modelling approach of the within-host dynamics. As diverse mechanisms were found to influence PRRSv infection, we established a model that details the major mechanisms and their regulations at the between-cell scale. We developed an ABC-like optimisation method to fit our model to an extensive set of experimental data, consisting of non-rebounder and rebounder viremia profiles. We compared, between both profiles, the estimated parameter values, the resulting immune dynamics and the efficacies of the underlying immune mechanisms. Exploring the influence of these mechanisms, we showed that rebound was promoted by high apoptosis, high cell infection and low cytolysis by Cytotoxic T Lymphocytes, while increasing neutralisation was very efficient to prevent rebounds.
Our paper provides an original model of the immune response and an appropriate systematic fitting method, whose interest extends beyond PRRS infection. It gives the first mechanistic explanation for emergence of rebounds during PRRSv infection. Moreover, results suggest that vaccines or genetic selection promoting strong neutralising and cytolytic responses, ideally associated with low apoptotic activity and cell permissiveness, would prevent rebound.
One of the biggest challenges in infection control is dealing with heterogeneity in host response to infection. Uniphasic vs. multiphasic infection dynamics are of particular interest given their potential consequences on the population dynamics and the efficacy of control strategies (vaccination, genetic selection). Multiphasic infection profiles have been reported for various infections such as Influenza, HIV, Hepatitis B and C, as well as Porcine Respiratory and Reproductive Syndrome (PRRS). They can occur during natural infection (HIV [1], equine Influenza [2], PRRSv [3]), under drug therapy [4, 5] or co-infection [6, 7]. In the majority of cases, the underlying causes of multiphasic infection profiles are subject to much speculation [6, 8–12].
In this context, infection by the PRRS virus (PRRSv) is of particular interest. It not only constitutes a major concern for the swine industry, responsible for significant economic losses worldwide [13, 14], but also elicits a highly diverse host response that may contribute to the experienced difficulty in eliminating this disease despite tremendous control efforts [15–18]. Rebounders (i.e. individuals exhibiting a biphasic infection profile) have been reported for various PRRS viral strains and pig breeds [3, 11, 19, 20]. In particular, a large-scale challenge experiment conducted by the Porcine Host Genetic Consortium (PHGC), in which almost 2000 pigs from various cross-breeds were infected with the same dose of a virulent PRRSv strain, revealed that around 20% of pigs exhibited viremia rebound within 6 weeks post infection, demonstrating that this phenomenon is genuine (i.e. not a simple measurement error) and common [19]. A previous study on this data set showed that the infection severity differed depending on the pig genotype; moreover, a higher proportion of rebounder pigs carried the genotype associated with severe infection [21, 22]. These results suggest that viremia rebound could be due to a genetic factor that would lead to variable immune responses, so viremia rebound could be determined by immune mechanisms. However, the mechanisms responsible for the emergence of rebound remain unclear [9, 11, 12, 19, 21, 23].
PRRSv targets antigen presenting cells (macrophages and dendritic cells), key components of the innate immune response, and hence alters the innate and the subsequent adaptive immune responses in complex ways. It induces a prolonged viremia due to its ability to hamper the whole immune response, mostly characterised by high pro-inflammatory and immuno-modulatory responses, a low antiviral response, a weak and delayed cellular response, as well as a significant but inefficient humoral response [13, 14, 24]. Moreover, infection and immune dynamics are highly variable among hosts and viral strains. Depending on experimental studies, various components of the immune response have been highlighted as having an impact on the severity and duration of PRRSv infection. The main ones are: (i) the target cell permissiveness and viral replication rate; (ii) the levels of antiviral cytokines (TNFα, IFNα and IFNγ) and immuno-regulatory cytokines, the latter being either pro-cellular (IL12 and IFNγ) or pro-humoral (IL10 and TGFβ); (iii) the orientation of the adaptive response towards the cellular (Cytotoxic T Lymphocytes and IFNγ), humoral (antibodies and IL10), or regulatory (TGFβ and IL10) response [reviews: 15–17, 25]. The aim of our study was to identify which of these immune mechanisms can reproduce and explain the rebound patterns observed in PRRSv infection dynamics. For this purpose we adopted a mechanistic modelling approach of the within-host infection dynamics.
Given the large spectrum of immune mechanisms found to influence PRRSv infection dynamics (detailed in [26], Chap. 1), a sufficiently comprehensive representation of the multiplex immune response was required to avoid preliminary bias. This was achieved by extending an integrative model of the viral and immune component dynamics within the host, representing immune mechanisms at the between-cell scale [27]. The resulting model, based on knowledge from in vitro and in vivo experimental studies on PRRSv, provides an explicit and detailed representation of: (i) the innate immune mechanisms, in particular the interactions between the virus and its target cells; (ii) the activation and orientation of the adaptive response towards the cellular, humoral or regulatory response; and (iii) the main cytokines and their complex regulations of the innate and adaptive immune mechanisms.
We fitted our within-host model to a viremia data sub-set from [19] (smoothed PHGC data). Due to the high number of model parameters, we were faced with a potential identifiability issue, preventing us from obtaining unique parameter estimates associated with each individual. However, experimental studies show that hosts challenged with the same inoculum may exhibit different immune responses and that contrasting immune responses can result in similar viremia profiles (reviewed in [26], Chap. 1). So our aim was to identify parameter sets that generate data-compatible uniphasic and biphasic viremia profiles, rather than one unique parameter set for each individual viremia profile. To do so, we developed an Approximate Bayesian Computation (ABC)-like fitting procedure, which allows an extensive exploration of a high-dimensional parameter space and is computationally less expensive than the standard ABC method. This procedure resulted in the selection of two viremia sets representing the between-host variability for uniphasic and biphasic profiles, respectively. We first examined the corresponding immune dynamics to characterise the response associated with the biphasic viremia profile. We then compared, between both viremia profiles, the set of estimated parameter values and the efficacies of immune mechanisms which are assumed to drive the viral dynamics. This led to the identification of discriminant candidate mechanisms, which we further explored with regard to their ability to either trigger or prevent virus load rebound. Hence, using an original model of the immune response and an appropriate systematic fitting method, the paper provides the first mechanistic explanation for PRRS viremia rebound, and possibly also for other virus infections.
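A hedged sketch of the acceptance step of such an ABC-like procedure is given below: a candidate parameter set is kept whenever the summary statistics of its simulated viremia (peak value, peak date and infection duration) fall within the ranges spanned by the observed profiles. Function names, the summary statistics shown and the detection threshold are illustrative assumptions rather than the exact implementation.

```python
import numpy as np

def viremia_summaries(t, v, detection_threshold=1.0):
    """Fitting criteria used to accept a simulation: peak viremia,
    peak date and infection duration (last day above the threshold)."""
    i_peak = int(np.argmax(v))
    above = np.flatnonzero(v > detection_threshold)
    duration = t[above[-1]] if above.size else 0.0
    return np.array([v[i_peak], t[i_peak], duration])

def abc_like_accept(params, simulate, lower_bounds, upper_bounds):
    """Keep a candidate parameter set if all viremia summaries fall inside
    the ranges spanned by the observed (smoothed) profiles."""
    t, v = simulate(params)          # user-supplied within-host simulator
    s = viremia_summaries(t, v)
    return bool(np.all((s >= lower_bounds) & (s <= upper_bounds)))
```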
Figure 1 shows a functional diagram of the mathematical model, which describes the evolution over time of the concentration of 19 state variables: the virus (V); the naive (Tn), mature non-infected (Tm) and mature infected (Ti) antigen presenting cells, which are the virus target cells; the natural killers (NK); the type 1 helper T cells (cellular effectors Ec); the type 2 helper T cells (humoral effectors Eh); the regulatory T cells (regulatory effectors Er); the cytotoxic T lymphocytes (CTL); the plasma cells (B); the neutralising antibodies (nAb); the pro-inflammatory cytokines (Pi, which groups IL1β, IL6, IL8 & CCL2); the antiviral cytokines (TNFα, IFNα, IFNγ); the immuno-modulatory cytokines (IL10, TGFβ); the pro-cellular regulatory cytokines (IFNγ, IL12); the pro-humoral regulatory cytokines (IL4, IL6); the pro-regulatory cytokine (TGFβ).
Functional diagram of the model representing the within-host immune response to PRRSv infection. Binding of PRRS viral particles (V) and naive target cells (Tn) either results in mature and non-infected cells (Tm) that phagocytose viral particles, or in mature and infected cells (Ti) that allow viral replication and excretion of new viral particles. Phagocytosis is amplified by antiviral cytokines (TNFα, IFNα, IFNγ) and inhibited by immuno-modulatory (IL10, TGFβ) cytokines; on the contrary, infection and viral replication are inhibited by antiviral cytokines and amplified by immuno-modulatory cytokines. TNFα induces the apoptosis of Tn, Tm and Ti. Viral particles are neutralised by antibodies (nAb); infected cells are cytolysed by natural killers (NK) and Cytotoxic T Lymphocytes (CTL). Mature target cells (Tm and Ti) synthesise cytokines and present the viral antigen to naive adaptive effectors (En, not explicitly represented in the model). Once activated, they differentiate into cellular (Ec), humoral (Eh) or regulatory (Er) effectors, depending on the cytokinic environment. Pro-cellular regulatory cytokines (IFNγ, IL12) favour Ec, whereas pro-humoral regulatory cytokines (IL4, IL6) favour Eh and pro-regulatory regulatory cytokines (TGFβ) favour Er. These effectors synthesise cytokines and induce the activation of plasma cells (B), which synthesise nAb. Moreover, Ec induce the activation of CTL. Finally, the recruitment of Tn and NK is amplified by pro-inflammatory cytokines (Pi, which groups IL1β, IL6, IL8 and CCL2). Colours – green: PRRSv particles; red: innate response; blue: adaptive response; purple: both innate and adaptive responses. Lines – plain with arrow: state changes; dashed (dotted) with arrow: (cytokine) syntheses; plain dark grey with ⊕: up-regulations by cytokines; plain light grey with ⊖: down-regulations by cytokines
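The full model tracks the 19 state variables listed above. As a heavily reduced, purely illustrative sketch of its backbone (virus, naive and infected target cells, and a single antiviral cytokine, with made-up parameter values), a numerical integration of such a system could look as follows; this is not the actual model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def reduced_model(t, y, beta, excr, mu_v, mu_i, recruit, synth, mu_c):
    """Toy reduction of the full 19-variable model: naive target cells Tn,
    infected cells Ti, free virus V and one antiviral cytokine C."""
    Tn, Ti, V, C = y
    infection = beta * Tn * V / (1.0 + C)        # infection inhibited by C
    dTn = recruit - infection - 0.1 * Tn         # recruitment and natural decay
    dTi = infection - mu_i * Ti
    dV = excr * Ti - mu_v * V - beta * Tn * V    # excretion, decay, uptake
    dC = synth * Ti - mu_c * C                   # synthesis by infected cells
    return [dTn, dTi, dV, dC]

pars = (1e-6, 5.0, 2.0, 0.5, 1e4, 1e-4, 1.0)     # made-up parameter values
sol = solve_ivp(reduced_model, (0.0, 42.0), [1e5, 0.0, 100.0, 0.0],
                args=pars, t_eval=np.linspace(0.0, 42.0, 200))
```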
Fitting the model to viremia data (from [19], smoothed PHGC data) produced a wide spectrum of uniphasic and biphasic viremia profiles. For each profile, we identified 625 parameter sets, referred to as "individuals", whose viremia characteristics (infection durations, peaks and peak dates) matched the viremia data (Fig. 2a & b). Differences between simulated and experimental data were observed (i) in the first few days post infection, where the model tended to predict a faster rise to peak viremia, and (ii) at the later stage of infection, where simulated biphasic profiles tended to experience a lower and later second peak than suggested by the data. Such relatively minor discrepancies are expected, given the adopted level of model complexity, and partly originate from the fact that viremia data were only sampled 8 times over 42 days, with the first sample on day 4; furthermore, experimental data were smoothed using Wood's functions [19].
Fitted viremia over infection time for the uniphasic and biphasic profiles. a-b Comparison between the 625 fitted individuals and the smoothed PHGC data (lower and upper envelopes in black, [19]) for the a uniphasic (green) and b biphasic (red) profiles. Black boxes: data ranges for the first viral peak, the rebound peak (max) and the minimum between the two peaks (min). c-d Comparison between the 35 representative individuals (lines) and the 625 fitted individuals from which the 35 were sampled (shaded area) for the c uniphasic (green) and d biphasic (red) profiles. Viremia detection threshold (horizontal dashed line). Semi-log graphs
Among the 625 individuals of each profile, parameter values were not uniformly distributed. In order to capture the full range of parameters associated with each profile without sampling bias, we used a k-means clustering method to generate a representative sample and obtained 35 individuals per viremia profile (Fig. 2c & d, see "Selection of representative individuals for both viremia profiles" section for the clustering method).
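A minimal sketch of this representative-individual selection is given below: accepted parameter sets are standardised, clustered with k-means into 35 groups, and the member closest to each centroid is retained. The standardisation choice and the use of scikit-learn are assumptions, not a description of the exact implementation.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def representative_individuals(param_sets, k=35, seed=0):
    """Cluster the accepted parameter sets (rows = individuals) and keep,
    for each cluster, the member closest to its centroid."""
    X = StandardScaler().fit_transform(param_sets)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    reps = []
    for c in range(k):
        members = np.flatnonzero(km.labels_ == c)
        dist = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dist)])
    return np.sort(np.array(reps))
```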
Characterisation of the immune response associated with the biphasic viremia profile
Individuals with uniphasic and biphasic viremia profiles also had, respectively, uniphasic and biphasic profiles for most of the immune components (Additional file 1). The main characteristics that discriminate between the biphasic and uniphasic profiles, illustrated in Figs. 3 and 4, are listed below.
Immune components discriminating between uniphasic and biphasic viremia profiles over infection time. Mean value (solid line) and standard deviation (shaded area) of the 35 representative individuals selected for the uniphasic (green) and biphasic (red) viremia profiles. Semi-log graphs. ∗p-value <5% when comparing uniphasic and biphasic profiles (permutation tests over four time periods: 0–10, 11–20, 21–31, 32–42 days)
Relative cytokine levels discriminating between uniphasic and biphasic viremia profiles over infection time. Percentage of a antiviral cytokines (IFNγ,IFNα,TNFα) among antiviral and immuno-modulatory (IL10,TGFβ) cytokines; b IFNγ and c IFNα among antiviral cytokines; d pro-cellular (IL12,IFNγ) and e pro-humoral (IL4,Pi) cytokines over the pro-cellular, pro-humoral and pro-regulatory (TGFβ) cytokines. Mean value (solid line) and standard deviation (shaded area) of the 35 representative individuals selected for the uniphasic (green) and biphasic (red) profiles; Balanced contribution level (horizontal dashed line). ∗p-value <5% when comparing uniphasic and biphasic profiles (permutation tests over four time periods: 0–10, 11–20, 21–31, 32–42 days)
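The comparisons in these figures rely on permutation tests over four time windows. A hedged sketch of such a test, comparing the mean level of one immune component between the two groups of individuals within one window, is given below; the test statistic (difference of group means) and the number of permutations are assumptions.

```python
import numpy as np

def permutation_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sided permutation test on the difference of group means;
    inputs are per-individual summaries (e.g. mean level over days 0-10)."""
    rng = np.random.default_rng(seed)
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    pooled = np.concatenate([a, b])
    observed = abs(a.mean() - b.mean())
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        hits += abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= observed
    return (hits + 1) / (n_perm + 1)     # conservative p-value estimate
```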
1 Higher immune response activation. The immune response activation is a global indicator of both the severity of infection and the host ability to counter infection. It is reflected by infected cell (as PRRSv targets the antigen presenting cells) and pro-inflammatory cytokine levels for the innate response, and by total helper T cell levels for the adaptive response. Biphasic profiles were associated with higher levels for these three immune components over the whole time window (Fig. 3a-c). In particular, these differences were significant for infected cells over the whole time window (Fig. 3a).
2 Stronger depletion of naive target cells. Infection causes a temporary reduction of naive target cells, which reduces both cell infection and immune functions of antigen-presenting cells (APC), as PRRSv targets APC. Levels of naive target cells were significantly lower for biphasic profiles until day 20 (Fig. 3d); the minimum was reached significantly earlier for biphasic profiles (test results not shown).
3 Early predominance of antiviral vs. immuno-modulatory cytokines. Immuno-modulatory cytokines (IL10, TGFβ) inhibit numerous immune functions and promote the target cell permissiveness while antiviral cytokines (TNFα, IFNα, IFNγ) are key inhibitors of the viral multiplication. Levels of IL10 were lower for biphasic profiles until day 20 (Fig. 3e), whereas TNFα and IFNα levels were higher (Fig. 3f-g). Differences in IFNα levels between profiles were particularly marked over the whole time period (Fig. 3g). These results suggest a predominant antiviral response at the earlier infection stage for biphasic profiles, which was confirmed by comparing the proportion of antiviral vs. immuno-modulatory cytokines (Fig. 4a). Furthermore, for biphasic profiles, antiviral cytokines were initially (i.e. first week post infection) dominated by IFNα and then by IFNγ, whereas IFNγ always dominated for uniphasic profiles (Fig. 4b-c). Compared to IFNα and IFNγ, TNFα was consistently relatively low for both profiles. IL10 was the predominant immuno-modulatory cytokine for the whole infection period and for both profiles (Additional file 1 R & S).
4 Weaker cytotoxic and neutralisation adaptive responses. Adaptive cytotoxic response, mediated by Cytotoxic T Lymphocytes, and neutralisation response, mediated by neutralising antibodies, are key immune mechanisms to counter viral infections. Levels of cytotoxic lymphocytes were lower for biphasic profiles during almost the whole time window (Fig. 3h). Levels of neutralising antibodies were negligible for both profiles during the earlier infection stage and significantly lower for biphasic profiles from day 10 (Fig. 3i). Furthermore, the adaptive response orientation was highly variable within each profile and on average orientated towards the humoral response for both profiles over the whole time window (Fig. 4d-e).
These four immune characteristics associated with biphasic viremia profiles can result from various interacting immune mechanisms with complex cytokine regulations. For instance, depletion of naive target cells (Characteristic 2) can be due to low recruitment of permissive target cells controlled by pro-inflammatory cytokines, high cellular decay amplified by TNFα, high cell infection or phagocytosis regulated by antiviral and immuno-modulatory cytokines. Therefore, a deeper exploration into the underlying mechanisms for these discriminant immune characteristics was required.
Identification of immune mechanisms responsible for the biphasic viremia profile
The baseline rate (i.e. model parameter) of a given immune mechanism defines the host's ability to carry out the corresponding immune function. Hence, comparing the estimated baseline rates between both viremia profiles can provide valuable information to identify the critical immune mechanisms responsible for biphasic profiles.
Nevertheless, immune components interact in complex ways involving more or less direct regulation loops via cytokines (Fig. 1). Consequently, the baseline rate (e.g. cell infection rate) does not necessarily reflect the efficacy of the mechanism (e.g. cell infection efficacy: total number of cells infected over total number of naive target cells recruited) or the dynamics of the corresponding immune component (e.g. level of infected cells over time). Therefore, the identification of critical immune mechanisms responsible for biphasic viremia profiles cannot be based on parameter values alone. Consequently, we also compared the efficacy of the mechanisms known to affect the viral dynamics directly, namely: cell infection, apoptosis and cytolysis of infected cells, as well as viral neutralisation by antibodies; or less directly: apoptosis of naive target cells.
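Since the efficacies are cumulative flow ratios rather than instantaneous rates, they can be obtained by integrating the corresponding fluxes along each simulated trajectory. The sketch below uses hypothetical flux names and assumes the simulator returns these fluxes on a common time grid.

```python
import numpy as np

def flux_ratio_efficacy(t, numerator_flux, denominator_flux):
    """Cumulative-flow efficacy, e.g. cell infection efficacy =
    integral of the infection flux / integral of the recruitment flux."""
    num = np.trapz(numerator_flux, t)
    den = np.trapz(denominator_flux, t)
    return num / den if den > 0 else 0.0

# The other efficacies follow the same pattern, for instance:
# ctl_cytolysis_efficacy = flux_ratio_efficacy(t, ctl_cytolysis_flux, infection_flux)
```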
Baseline rates
The values of six of the 14 estimated baseline rates differed significantly between uniphasic and biphasic profiles; they are presented in relative scale in Fig. 5 (see Additional file 2 for all 14 parameters).
Baseline rates discriminating between uniphasic and biphasic viremia profiles. Parameters linked to a-b viral multiplication; c adaptive response activation; d-f antiviral (TNFα and IFNα) vs. immuno-modulatory (IL10) cytokine syntheses by activated target cells. Rate values are presented in relative scale, i.e. normalised according to their assumed upper and lower boundaries (see Additional file 5: Table A5-4). Mean value and standard deviation of the 35 representative individuals selected for the uniphasic (green) and biphasic (red) profiles over the parameter ranges. Parameters were significantly different between profiles (⋆p-value <1%, Kolmogorov–Smirnov test)
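The between-profile comparison of baseline rates described in the caption (normalisation to assumed bounds, then a two-sample Kolmogorov–Smirnov test) could be sketched as follows; the function and the bounds passed to it are illustrative placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_baseline_rate(uniphasic_values, biphasic_values, lower, upper):
    """Normalise one baseline rate to its admissible range and test whether
    its distribution differs between the two profiles."""
    uni = (np.asarray(uniphasic_values, dtype=float) - lower) / (upper - lower)
    bi = (np.asarray(biphasic_values, dtype=float) - lower) / (upper - lower)
    return ks_2samp(uni, bi)          # returns the KS statistic and p-value
```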
Firstly, the biphasic profile was characterised by higher baseline rates for cell infection, viral excretion and T-helper activation (Fig. 5a-c), which can explain the higher immune response activation (Characteristic 1) and the higher naive target cell depletion (Characteristic 2).
Secondly, the biphasic profile had higher baseline rates for the synthesis of TNFα and IFNα and lower baseline rates for IL10 (Fig. 5d-f), which can explain the early predominance of antiviral vs. immuno-modulatory cytokines (Characteristic 3). Moreover, as TNFα induces target cell apoptosis and IL10 inhibits the synthesis of TNFα, it can also explain the higher naive target cell depletion (Characteristic 2).
However, no baseline rate differences could directly explain the weaker cytotoxic and neutralisation adaptive responses for biphasic profiles (Characteristic 4). This characteristic probably results from multiple and indirect mechanisms.
Efficacies of immune mechanisms
The efficacies of the key immune mechanisms (cell infection, apoptosis, cytolysis of infected cells and viral neutralisation) for the uniphasic and biphasic profiles are presented in Fig. 6.
Immune mechanism efficacies discriminating between uniphasic and biphasic viremia profiles. Efficacies of mechanisms known to affect the viral dynamics directly: a cell infection, c-e elimination of infected cells, f neutralisation; or indirectly: b apoptosis of naive target cells. Mean value and standard deviation of the 35 representative individuals selected for the uniphasic (green) and biphasic (red) profiles. ∗p-value <1% when comparing uniphasic and biphasic profiles (Kolmogorov–Smirnov tests over two time periods: 0–20, 21–42 days)
Cell infection. Target cell infection results in viral multiplication, but also induces the synthesis of various cytokines and the activation of the adaptive response. The infection efficacy, defined as the total number of cells infected over the total number of naive target cells recruited, was globally low (Fig. 6a). However, it was sufficient to induce host infection with realistic viremia (Fig. 2). The efficacy was significantly higher for biphasic profiles, which underpins the significant difference exhibited by the estimated infection rates. The difference between both profiles was particularly marked for the first time period, i.e. before any viremia rebound occurred. This result suggests that cell infection could be a critical immune mechanism determining viremia profile.
Apoptosis of naive target cells. Naive target cell apoptosis can lead to the depletion of these cells, which could be a critical mechanism to restrain cell infection. However, apoptosis efficacy, defined as the total number of naive target cells undergoing apoptosis over the total number of naive target cells recruited, was significantly higher for biphasic profiles (Fig. 6b). The difference between both profiles was particularly marked for the earlier infection stage. Apoptosis efficacy was globally high for both profiles (21 and 39% on average). These findings showed that naive target cell apoptosis was a critical mechanism and that it could determine the viremia profile.
Elimination of infected cells. Immune response-mediated killing of infected cells plays a fundamental role in preventing continuous production of new viral particles. Our model includes apoptosis and cytolysis as the main mechanisms that kill infected cells. Apoptosis and cytolysis efficacies were defined as the total number of infected cells undergoing apoptosis, respectively cytolysis, over the total number of cells infected. In contrast to the relatively low efficacy of apoptosis (less than 15% on average, Fig. 6c), cytolysis was found to play a major role in the destruction of infected cells for both profiles (higher than 80% on average, Fig. 6d, e).
Natural killer (NK) cytolysis efficacy was low for both profiles (4 to 17% on average) and not significantly higher for biphasic profiles (Fig. 6d) despite significantly higher levels of NK cells (Additional file 1: E). In contrast, Cytotoxic T Lymphocyte (CTL) cytolysis efficacy was high for both profiles (higher than 63% on average) and significantly lower for biphasic profiles (Fig. 6e). The difference in CTL cytolysis efficacies between both profiles was particularly marked at the earlier infection stage. This result suggests that Cytotoxic T Lymphocyte cytolysis could be a critical immune mechanism determining viremia profile, while natural killer cytolysis and infected cell apoptosis would not.
Viral neutralisation by antibodies. Neutralisation of viral particles by antibodies prevents new cell infection. The neutralisation efficacy, defined as the total number of viral particles neutralised over the total number of viral particles created, was low for both profiles (mean values lower than 10%, Fig. 6f). More precisely, this efficacy was almost null for both profiles at the earlier infection stage and significantly lower for the biphasic profile at the later infection stage. This result suggests that viral neutralisation is probably not a critical immune mechanism determining viremia profile.
Validation of immune mechanisms inducing or preventing the biphasic viremia profile
In order to disentangle whether the above candidate immune mechanisms can indeed induce biphasic viremia profiles and could thus be targeted by future pharmaceutical or genetic interventions, we tested whether a viremia profile inversion could be achieved by boosting or reducing the efficacy of either one of these mechanisms. Figure 7 shows the percentages of individuals, among the 35 representative individuals selected per viremia profile, that turned from biphasic to uniphasic viremia profiles, and vice versa (Additional file 3 for more details).
Influence of candidate immune mechanisms on viremia profile inversion. Percentage of individuals, among the 35 representative individuals selected for the uniphasic (green) and biphasic (red) viremia profiles, that had a profile inversion when boosting or inhibiting (depending on the profile) either mechanism: infection, apoptosis of naive target cells by TNFα, cytolysis of infected cells by NK or CTL, viral neutralisation by nAb (standard error bars were obtained by jackknifing)
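Two ingredients of this validation step can be sketched in code: a classifier that flags a simulated viremia curve as biphasic, and the jackknife standard error of the inversion percentage. The rebound criterion used here (a post-peak drop followed by a new rise above the detection threshold and above twice the post-peak minimum) is an assumption, not the exact rule used to classify the PHGC profiles.

```python
import numpy as np

def is_biphasic(v, detection_threshold=1.0):
    """Flag a rebound: after the first peak the viremia drops and later
    rises again above the detection threshold and above twice its
    post-peak minimum (assumed criterion)."""
    i_peak = int(np.argmax(v))
    tail = v[i_peak:]
    i_min = int(np.argmin(tail))
    rebound = tail[i_min:].max()
    return bool(rebound > max(detection_threshold, 2.0 * tail[i_min]))

def jackknife_se_of_proportion(flags):
    """Leave-one-out (jackknife) standard error of a proportion, e.g. the
    percentage of individuals whose profile was inverted."""
    f = np.asarray(flags, dtype=float)
    n = len(f)
    loo = np.array([np.delete(f, i).mean() for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
```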
Most individuals (uniphasic: 77%, biphasic: 100%) had a viremia profile inversion by varying the efficacy of at least one candidate immune mechanism. Varying the NK cytolysis efficacy never induced a viremia profile inversion. Boosting (reducing) the cell infection efficacy induced a profile inversion for almost a quarter of uniphasic (biphasic) individuals. Boosting (reducing) the apoptosis efficacy induced a profile inversion for more than half of uniphasic (biphasic) individuals. Reducing the CTL cytolysis efficacy induced a profile inversion for 37% of the uniphasic individuals, while boosting its efficacy induced a profile inversion for all biphasic individuals. Reducing the neutralisation efficacy never induced a profile inversion for uniphasic individuals, whereas boosting its efficacy resulted in a profile inversion for more than 90% of biphasic individuals.
To conclude, biphasic viremia profiles mainly resulted from high apoptosis and low CTL cytolysis efficacies; moreover, boosting CTL cytolysis or neutralisation efficacy was very efficient to prevent biphasic viremia profile.
Viremia rebound following a steady phase of viral decline is a common but undesirable phenomenon for PRRSv and other viral infections across a range of species [1–3, 19]. The PHGC challenge experiments, in which thousands of pigs were infected with the same dose of a virulent PRRSv strain, revealed substantial between-host variability in infection dynamics, with a quarter of pigs exhibiting viremia rebound [PHGC data: 19, 21]. What causes some individuals to experience viremia rebound while others manage to steadily clear the virus has however been subject to much speculation [9, 11, 12, 19, 21, 23]. Our mechanistic within-host infection model, fitted to smoothed PHGC viremia data [19], not only successfully captured the observed between-host variation in infection dynamics but also offers, for the first time, insight into potential causative immune mechanisms for generating rebound. In particular, contrary to current hypotheses emerging from genetic analyses, our model reveals that viremia rebound can occur as a result of between-host differences in immune competence alone, without the commonly hypothesised emergence of viral escape mutants or re-infection [12, 19, 23]. This finding has profound consequences for the development of intervention strategies, as it would imply that rebound can be prevented by modifying the immune response through pharmaceuticals or genetic selection.
We identified several mechanisms that differed between the uniphasic and biphasic viremia profiles. Firstly, the immune response activation was higher for rebounders, although they elicited on average weaker and less efficacious cytotoxic and neutralisation responses. Rebounders also exhibited a higher cell infection efficacy, despite an early predominance of antiviral cytokines (IFNγ, IFNα, TNFα) over immuno-modulatory cytokines (IL10, TGFβ). Lastly, target cell apoptosis by TNFα was more efficacious for rebounders, which provoked a rapid and strong depletion of naive target cells. All these differences, except for the neutralisation efficacy, occurred prior to the onset of rebound, suggesting that these were critical mechanisms that could determine viremia rebound. These results were confirmed by our validation step of the candidate immune mechanisms. Inhibiting neutralisation never generated a rebound. However, boosting infected cell cytolysis by cytotoxic T lymphocytes or viral neutralisation, as well as inhibiting target cell apoptosis, effectively prevented viremia rebound. Surprisingly, altering the efficacy of infected cell cytolysis by natural killers had no impact on the rebound.
To our knowledge, no published experimental study compared the immune response between uniphasic and biphasic PRRS viremia profiles. Mechanistic models of host response fitted to influenza virus data predicted biphasic (uniphasic) viremia profiles in the presence (absence) of IFNα [2, 10, 28, 29]. This finding is consistent with our results, as we showed that viremia rebounds were associated with significantly higher levels of IFNα.
Modelling approach
It should be noted that rebound patterns can be easily generated with simple models with few broad immune categories that exhibit oscillatory behaviour (see [2] for an elegant example). However, such simplistic models are generally limited in scope, are often dismissed as over-simplistic by experimental biologists, and often fail to reproduce exact patterns of real data [2, 6, 30]. The present study aimed to go a step further: our model aims to reproduce the viremia characteristics observed in experimentally infected individuals and to determine why some individuals experienced rebound while others did not.
A number of mechanistic models of virus infections in a variety of species aim at linking viremia profiles with the immune response, but only three for PRRSv [27, 31, 32]. Of particular relevance to our PRRSv modelling study are influenza models [2, 6, 10, 28–30, 33], as influenza is a respiratory virus that targets antigen presenting cells, among other cells. Moreover, viremia rebounds have been observed in natural influenza infections [2]. Influenza models often provide a fairly simplified representation of the immune response, focusing on the dynamics of a few measurable immune components ([26], Chap. 1). However, a recent study confronted a number of existing influenza models with experimental data and showed that these models failed to accurately reproduce at least one aspect of the immune response, even though the model parameters had been fitted to the data [10]. Moreover, several studies have pointed out the necessity of more comprehensive models to infer which immune mechanisms determine viremia characteristics [2, 6, 30].
Particularly for PRRSv, a large spectrum of immune mechanisms were found to influence the infection dynamics: target cell permissiveness, viral replication, adaptive response orientation, cytolysis, antibody neutralisation, all being modulated by various cytokines [reviews: 15–17, 25]. These mechanisms were only partly represented in the published models cited above, except in [27, 32] which provided a basis for this study but had a rougher representation of the adaptive response activation. Our study, based on an extension of these latter models, includes all relevant mechanisms and reflects the current understanding on PRRSv within-host dynamics. Our model contains an explicit and detailed representation of both the innate and adaptive immune mechanisms, including complex regulations by major cytokines, but with the minimum number of parameters. As these mechanisms are involved in most infections, this paper provides a more general framework for modelling studies requiring a representation of the global immune response.
Our holistic approach gave rise to several candidate mechanisms underlying differences in infection profiles, which are difficult to observe in experiments. Moreover, our approach illustrates an important point that is often overlooked in statistical data analysis, i.e. that differences in observed levels of immune components do not necessarily imply differences in their immune functions. For example, levels of natural killers differed significantly between the uniphasic and biphasic viremia profiles in our fitted model. However, the NK cytolysis efficacy, i.e. the proportion of infected cells cytolysed by NK, was similar for both profiles. Moreover, boosting or inhibiting this efficacy neither generated nor prevented rebound. In contrast, cytotoxic T lymphocytes only differed significantly in efficacy, not in actual levels, but they were highly effective in preventing rebound.
Fitting procedure
A known caveat of complex models such as ours is that they inevitably require many parameters for which no prior estimates exist, thus causing a potential identifiability issue for model fitting [34]. As, on the one hand, we fitted our model only to viremia data and, on the other hand, PRRS experimental studies showed that contrasting immune responses could result in similar viremia (reviewed in [26], Chap. 1), we expected our model to be non-identifiable. Furthermore, the viremia data set [19] exhibits a large between-host variability within each viremia profile. Consequently, rather than reducing the model complexity and thus significantly limiting the scope of our approach, we chose to deal with this issue by relaxing the uniqueness constraint for model parameter values: we defined fitting criteria and developed a fitting procedure that identifies data-compatible parameter sets instead of one unique parameter set for each individual viremia.
For this purpose, we developed an Approximate Bayesian Computation (ABC)-like fitting procedure that extensively explores the parameter space using an Adaptive Random Search (ARS) algorithm, starting from a large number of initial conditions. This procedure allows an extensive exploration of a high-dimensional parameter space and is computationally less expensive than standard ABC. In total, over 10^9 model simulations were performed to identify 625 data-compatible parameter sets for each viremia profile. In order to best represent the host diversity within each viremia profile, we used a clustering method to sample the 625 estimated parameter sets, rather than considering the parameter posterior distributions.
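As an illustration of the search component, a common Adaptive Random Search variant proposes candidates in a neighbourhood of the current best point and adapts the neighbourhood size to successes and failures. The sketch below implements one such variant and is not necessarily the exact algorithm used in our procedure.

```python
import numpy as np

def adaptive_random_search(score, x0, lower, upper, n_iter=1000, seed=0):
    """Minimise score(x) inside box bounds: sample candidates around the
    current best point, enlarge the search radius after an improvement
    and shrink it otherwise (one common ARS variant)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    x_best = np.asarray(x0, dtype=float)
    f_best = score(x_best)
    radius = 0.1 * (upper - lower)
    for _ in range(n_iter):
        candidate = np.clip(x_best + radius * rng.standard_normal(x_best.size),
                            lower, upper)
        f = score(candidate)
        if f < f_best:
            x_best, f_best = candidate, f
            radius = np.minimum(radius * 1.2, upper - lower)        # expand on success
        else:
            radius = np.maximum(radius * 0.95, 1e-4 * (upper - lower))  # contract otherwise
    return x_best, f_best
```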
So our method does not identify parameter sets that fit individual viremia data, as many classical estimation methods do, nor does it provide posterior distributions, as Bayesian methods do. It ensures, however, that the viral indicators of the selected parameter sets match the observed data ranges. Moreover, it allowed us to overcome the identifiability issues with a reasonable computational cost and simultaneously capture the between-host variability in individual viremia profiles.
Comparison of model results to literature
The fitted model not only reproduced the wide viremia range observed in the viremia data, but also generated immune response profiles similar to those reported in independent experimental studies. For example, innate immune components mainly peaked one week post infection, whereas the adaptive immune components peaked after week two [14, 35, 36] and neutralising antibodies appeared after week three [14, 24, 37]. Furthermore, cytokine levels varied substantially among simulations, with peak values in agreement with experimental observations [14, 15, 17, 24, 36, 38]. Similarly, the orientation of the adaptive response was highly variable, but on average favoured the humoral response, in line with experimental studies [15–17, 25]. Finally, our model supports experimental results that identified target cell apoptosis by TNFα as a critical mechanism for the early naive target cell depletion [39, 40]. Within-host dynamics selected by our fitting procedure are hence compatible with published experimental data, although we would need more longitudinal data on various immune components to fully validate our model.
However, our model is likely not an accurate representation of the early infection dynamics. Despite its enhanced level of complexity in comparison to previous PRRSv infection models, our model still constitutes a gross over-simplification of the immensely complex fine-tuned immune system. In particular, our model ignores spatial structure despite evidence that infection kinetics are tissue specific and are partly determined by migration of immune components between body compartments (e.g. [20, 41]). This is particularly important at the onset of infection, when immune cells need to be recruited to the infection site. Furthermore, the model does not incorporate time delays for immune initiation or gradual build up of immune efficacies, which are also known to play a key role in infection dynamics (e.g. [42]). These simplifications were necessary in the absence of data to inform the model parameters. However, they lead to the unrealistically sharp rises in viral load (Fig. 2) and some immune response components (Fig. 3) at the early stage of infection.
Furthermore, caution is advised when interpreting the actual estimated model parameter values (relative scales in Additional file 2 and boundaries in Additional file 5: Table A5-4), as they are affected by several factors. For example, to limit over-parametrisation, the values of some model parameters had to be fixed to somewhat arbitrary values. As a consequence, the values of the remaining model parameters included in the fitting process partly depend on these fixed values [43]. Furthermore, as only mechanisms that had previously been identified to play a role in PRRSv infection dynamics were included in the model, the efficacy of mechanisms represented in the model could be exaggerated, as these mechanisms may absorb functions of other immune mechanisms excluded from the model. Lastly, as data to inform parameter estimates for most immune components are extremely sparse in the literature, a conservatively large value range was admitted for the model parameters in both the preliminary numerical explorations and the fitting process. As a result of all these contributing factors, model parameter estimates may differ from their actual values. This does not affect the model conclusions, which are based on comparisons between profiles that were generated under the same model assumptions.
Alternative hypotheses for rebound
Our study clearly illustrates that viremia rebound can originate from differences in the host (genetically regulated) immune response alone. This result appears to stand in conflict with previous genetic studies of the PHGC data, which found that viremia rebound was not heritable and which led to the conclusion that rebound was more likely caused by factors related to the virus or the environment [12, 19, 21]. So we explored the role of host immune genetics on viremia rebound further, focusing on a polymorphism (WUR SNP) previously found to confer significant differences in cumulative viremia within the first 21 days post infection [21, 22]. We found that resistant pigs, i.e. pigs carrying the beneficial allele, were less likely to experience viremia rebound (odds ratio 2.4; 95% CI: [1.2,4.9]). This implies that rebound is partly under host genetic control. The lack of genetic signal found in previous genetic analyses may potentially originate from an improper classification of pigs as rebounders or non-rebounders. Some pigs classified as non-rebounders may have experienced rebound later, i.e. outside the 42 day observation period. This is why we worked on a data subset in this study, in which we selected non-rebounders that would most probably not experience a rebound outside the observation period.
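The reported odds ratio and its 95% confidence interval can be obtained from the 2x2 genotype-by-rebound table with a standard Woolf (log-odds) interval, as sketched below; the counts in the usage comment are placeholders, not the actual PHGC numbers.

```python
import numpy as np
from scipy.stats import norm

def odds_ratio_ci(a, b, c, d, alpha=0.05):
    """Odds ratio and Woolf (log-odds) confidence interval for a 2x2 table:
    a, b = rebounders / non-rebounders among one genotype class,
    c, d = rebounders / non-rebounders among the other class."""
    or_ = (a * d) / (b * c)
    se = np.sqrt(1.0 / a + 1.0 / b + 1.0 / c + 1.0 / d)
    z = norm.ppf(1.0 - alpha / 2.0)
    lo, hi = np.exp(np.log(or_) - z * se), np.exp(np.log(or_) + z * se)
    return or_, (lo, hi)

# Illustrative counts only -- not the actual PHGC numbers:
# odds_ratio_ci(40, 120, 20, 140)
```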
Previous studies proposed a number of alternative mechanisms responsible for the emergence of viremia rebound. These include within-host viral mutations [8, 12, 19], re-infection by infected contact individuals [6, 12] or spontaneous release of the virus from lymph nodes into the blood stream [12, 19]. Our systemic within-host model of a single strain infection cannot provide any insight into the contribution of these mechanisms to viremia rebound. However, the current model could be easily extended to test mutation and re-exposure hypotheses.
Our results have important implications for the development of control strategies, as they suggest that rebound could be prevented by vaccines or genetic control methods targeting specific components of the immune response. We showed that boosting the efficacy of cytotoxic T lymphocytes or neutralising antibodies in our model effectively prevented rebound. Cytotoxic T lymphocytes and neutralising antibodies are the usual targets of vaccines [15, 17, 25, 36]. However, given the high diversity of circulating PRRSv strains, cross-protection remains a major challenge for PRRSv vaccination [15]. Consequently, vaccines using adjuvants that target non-antigen-specific mechanisms are particularly relevant. Interestingly, our results indicate that reducing TNFα-induced apoptosis should also prevent rebound, but to our knowledge, such vaccines have not yet been explored [44].
Our results also offer relevant insights for genetic disease control strategies. In particular, they suggest that marker-assisted genetic selection of pigs carrying the identified resistance allele at the WUR SNP would not only reduce the overall virus load of infected pigs anticipated from previous studies, but also reduce the occurrence of viremia rebound. It would therefore potentially help to eliminate the infection faster from the herd. Moreover, the key immune mechanisms that we identified may help to focus ongoing research about the role of the GBP5 candidate gene for this resistance SNP, which is currently poorly understood [45]. Furthermore, recent scientific breakthroughs in blocking the permissiveness of porcine alveolar macrophages to some PRRSv strains by gene editing highlight potential new avenues for genetic disease control [46, 47]. Our model suggests that considerable beneficial effects could already be achieved, even if gene editing only led to a partial reduction of target cell permissiveness.
Finally, our modelling approach, together with the model fitting and validation procedure of candidate immune mechanisms developed in this study, provides a useful template for complementing conventional statistical data analyses with more sophisticated mechanistic models. Such models integrate existing biological understanding and provide new insights into the causative underlying mechanisms of observed statistical associations. This approach can be easily adapted to other virus infections in different host species.
We developed a holistic and comprehensive model of within-host PRRSv infection that represents the large spectrum of immune mechanisms influencing the infection dynamics. This model gave rise to several candidate mechanisms underlying differences in infection profiles, which are difficult to observe in experiments and cannot be targeted by simplistic models. In order to identify the model parameter values that generate realistic within-host dynamics, we developed an ABC-like fitting procedure. This method overcomes the identifiability issues, a known caveat of such complex models, at a reasonable computational cost and with a good representation of the variability among individuals. Our fitted model not only successfully captures the observed between-host variation in infection dynamics but also provides, for the first time, insight into potential causative immune mechanisms generating PRRS viremia rebound. This finding has profound consequences for the development of intervention strategies, as it would imply that rebound can be prevented by modifying the immune response through pharmaceuticals or genetic selection.
The viremia data considered in this study were derived from longitudinal viremia measures of approximately 1600 weaner pigs experimentally infected with a virulent strain of PRRSv, carried out by the PRRS Host Genetic Consortium (PHGC). A detailed description of the experimental protocol, data collection and molecular techniques can be found in [9, 48]. Briefly, the data result from a primary infection of non-isolated weaner pigs with a highly virulent North American PRRSv strain in controlled conditions, in eight distinct experimental trials (∼200 pigs per trial), carried out at the same high-health farm at Kansas State University, following identical protocols. Pigs were commercial cross-breeds provided by different breeding companies, thus exhibiting a large variation in host response [19, 22]. Viremia was found to be moderately heritable, pointing to significant host genetic influence underlying disease severity and progression [21]. All source farms were free of PRRSv, Mycoplasma hyopneumoniae, and swine influenza virus. Animals were transported at weaning (average age of 21 days) to Kansas State University and randomly placed into pens of 10 to 15 pigs. After a 7-day acclimation period, pigs were experimentally infected, both intramuscularly and intranasally, with \(10^{5}\) TCID50/ml of NVSL-97-7985, a highly virulent PRRSv isolate [49]. Blood samples were collected at 0, 4, 7, 11, 14, 19/21, 28, 35, and 40/42 days post infection (dpi). Serum viremia was measured using a semi-quantitative TaqMan PCR assay for PRRSv RNA. Due to the sensitivity of RT-PCR, the detection threshold (and thus the threshold for infection resolution) was set at 10 TCID50/ml.
Visual inspection of individual viremia measures over time confirmed that all animals were infected with peak viremia levels above \(10^{3}\) TCID50/ml and that only a subset of pigs (45%) had managed to clear the infection within the 42-day observation period [19, 22]. The majority of viremia data were uniphasic with a viremia peak ranging between 4 and 11 dpi, but a subset of pigs experienced a viremia rebound (viremia increase after post-peak decline) 3 weeks post infection [21]. A previous analysis showed that an adequate mathematical representation of the full range of viremia data could be obtained by fitting Wood's functions to the longitudinal log-transformed viremia measurements of each pig, using Bayesian inference with a likelihood framework [19]. This approach not only produced for each pig a continuous smoothed viremia curve from 0 to 42 dpi, but also provided a statistical classification of individual data into non-rebounders with a uniphasic viremia profile and rebounders with a biphasic viremia profile [19]. Based on this analysis, 17% of the data were classified as biphasic, indicating that viremia rebound is a genuine phenomenon rather than a measurement error.
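To make the smoothing step concrete, the sketch below fits the classical Wood's (gamma-type) function, V(t) = a·t^b·e^(−ct), to the log-transformed viremia measurements of a single pig with a plain least-squares routine. This is only a minimal illustration: the analysis in [19] used Bayesian inference within a likelihood framework and a biphasic extension of Wood's function, and the sampling days and titer values below are invented placeholders, not PHGC measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def log_wood(t, log_a, b, c):
    """log10 of Wood's function V(t) = a * t^b * exp(-c*t)."""
    return log_a + b * np.log10(t) - c * t / np.log(10)

# Illustrative sampling days and log10 viremia values for one pig (not real PHGC data)
days = np.array([4, 7, 11, 14, 21, 28, 35, 42], dtype=float)
log_viremia = np.array([4.1, 5.2, 5.0, 4.3, 3.2, 2.1, 1.3, 0.8])

# Least-squares fit of the uniphasic Wood's curve on the log scale
params, _ = curve_fit(log_wood, days, log_viremia, p0=[3.0, 1.0, 0.1])
log_a, b, c = params
print(f"log10(a)={log_a:.2f}, b={b:.2f}, c={c:.2f}")

# Smoothed continuous curve over 0-42 dpi (avoid t=0, where t^b is degenerate)
t = np.linspace(0.1, 42, 400)
smoothed_log_viremia = log_wood(t, *params)
```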
In this study, we aimed at identifying the immune mechanisms that discriminate between uniphasic and biphasic viremia profiles, from data observed during a 42-day observation period. To do so, we selected a relevant subset of the smoothed PHGC data [19]. Firstly, we needed to ensure that the uniphasic data would most probably not exhibit a second viremia peak beyond the 42-day observation period. So we only considered data showing a clear resolution of viremia, i.e. viremia measurements below 10 TCID50/ml, the detection threshold, over at least two consecutive weeks within 35 dpi. This more constraining criterion, rather than 42 dpi, was chosen based on the observation that no viremia curve exhibiting 7 consecutive days or more below the detection threshold was classified as biphasic ([19], personal communication). With this first constraint, 20% of the curves initially classified as uniphasic were retained.
Secondly, we wanted the uniphasic and biphasic profiles to be as comparable as possible during the phase corresponding to the first viremia peak, i.e. prior to the rebound onset (0 to 20 dpi). Previous analyses of the complete data set found no significant differences between the two profiles within the first 21 dpi [19]. To reinforce their similarity, we added extra constraints on the following three key profile shape indicators (Fig. 8): the viral peak (Vmax), the date of the viral peak (Tmax) and the standardised viremia decline rate after the peak (\(\mathcal {S}_{V}\)). The latter was defined as follows (with V(t) the viral titer over time t):
$$ \begin{aligned} \mathcal{S}_{V} =& \frac{V_{\text{max}}-V(t=T_{\text{max}}+12)}{12 \times V_{\text{max}}}, \end{aligned} $$
Fig. 8 Definition of viral indicators for the uniphasic and biphasic profiles. For both profiles (a, b): viral peak (Vmax), date of the viral peak (Tmax), standardised rate of viremia decline after the peak (\(\mathcal {S}_{V}\), defined in Eq. (1)) and infection duration (DI) when defined, i.e. when the viremia remains under the detection threshold until the end of the experiment. For the biphasic profile only (b): minimum reached before the second viral peak (VminR), date at which this minimum is reached (TminR), second viral peak (VmaxR), date of the second viral peak (TmaxR), and viral titer at the end of the experiment (Vend) when defined, i.e. when the viremia is above the detection threshold. Grey area: viremia lower than the detection threshold (10 TCID50/ml) or after the end of the experiment
The 12-day post-peak time chosen in Eq. (1) roughly corresponds to the point at which uniphasic curves have declined to half their peak value; moreover, it always precedes the rebound onset for biphasic curves. We then selected curves with key indicators within the range shared by the uniphasic data preselected in the first step described above and the biphasic data (Table 1). This second and final step resulted in a subset of 131 non-rebounders, representing 12% of all uniphasic data, and 109 rebounders, representing 48% of all biphasic data. Selected data are illustrated in Fig. 9 and provided in Additional file 4.
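The three shape indicators can be read directly off a smoothed viremia curve. The following minimal sketch (function and variable names are ours) computes Vmax, Tmax and the standardised decline rate \(\mathcal{S}_{V}\) of Eq. (1) from a sampled curve:

```python
import numpy as np

def viral_indicators(t, v):
    """Compute Vmax, Tmax and the standardised post-peak decline rate S_V (Eq. 1)
    from a viremia curve sampled at times t (dpi) with titers v (TCID50/ml)."""
    i_peak = int(np.argmax(v))
    v_max, t_max = v[i_peak], t[i_peak]
    # Viremia 12 days after the peak, by linear interpolation on the sampled curve
    v_12 = np.interp(t_max + 12.0, t, v)
    s_v = (v_max - v_12) / (12.0 * v_max)
    return v_max, t_max, s_v

# Example on a synthetic uniphasic curve peaking at 7 dpi (illustrative values only)
t = np.linspace(0, 42, 421)
v = 1e5 * np.exp(-0.5 * ((t - 7.0) / 6.0) ** 2)
print(viral_indicators(t, v))
```

Applying the same function to every smoothed curve yields the indicator ranges summarised in Table 1.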
Fig. 9 Uniphasic and biphasic viremia data subsets. Selection from the smoothed PHGC data [19] of (a) 131 pigs (green curves) out of 1091 for the uniphasic profile and (b) 109 pigs (red curves) out of 227 for the biphasic profile (non-selected data in grey). Data smoothed by fitting Wood's functions [19]. Dotted line: detection threshold
Table 1 Summary statistics (minimum (min.), mean and maximum (max.) values) of the viral indicators (defined in Fig. 8) for the uniphasic (131 pigs) and biphasic (109 pigs) viremia data subsets (from the smoothed PHGC data [19])
Mathematical model
We used a deterministic model that describes the within-host dynamics induced by a primary PRRSv infection in a PRRSv-naive post-weaning pig. The model represents the mechanisms at the between-cell scale and provides an integrative view of the immune response. It extends a previous model representing the dynamics in the main infection site, the lung, and allowed us to identify the immune mechanisms that determine the infection duration [27]. This previous model focused on the interactions between the virus and its main target cells in the lung, the pulmonary macrophages, which are major cells of the innate immune system. It included an explicit and detailed description of the innate immune response and a coarse description of the adaptive response, in addition to the main cytokines and their complex regulations of the immune mechanisms. Compared to this previous model, we mainly detailed the activation and orientation steps of the adaptive response in order to get a more balanced and realistic view of the immune response in the whole pig.
The model describes the evolution over time of the concentration of 19 state variables: the virus, three states for the target cells, the natural killers, five types of effector cells of the adaptive response, the neutralising antibodies, and eight (groups of) cytokines. The functional diagram of the model is shown in Fig. 1. Our modelling assumptions are detailed and justified in Additional file 5, which gives a complete description of the model and corresponding equations. We present below an outline of the model, detailing a few representative key processes and equations, with an emphasis on the adaptive response.
Viral particles and target cells
PRRS viral particles (V) target antigen-presenting cells, consisting of alveolar macrophages, conventional and plasmacytoid dendritic cells. These cells are represented as a functional group with three states: naive (Tn), mature and non-infected (Tm), or mature and infected (Ti).
The infection is initiated by the influx of viral particles into the infection site, represented by an exposure function of time E(t) mimicking the infection protocol [32]. The interaction between viral particles and naive or mature target cells either results in cell infection (rate βT) or phagocytosis of viral particles (rate ηT). These interactions are regulated by cytokines, which either activate (κ+(∙)), amplify (1+κ+(∙)) or inhibit (κ−(∙)) a mechanism. Phagocytosis is amplified by antiviral cytokines (TNFα, IFNα, IFNγ) and inhibited by immuno-modulatory cytokines (IL10, TGFβ); infection regulations are just the opposite. Cell infection results in the excretion of free viral particles (rate eT), representing the replication within the cell and release outside the cell, which is inhibited by antiviral cytokines. Viral particles are subject to natural decay (rate \(\mu _{V}^{\text {nat}}\)) and can be neutralised by antibodies nAb (rate \(\mu ^{\text {ad}}_{V}\)). The resulting viral dynamics, which determines the viremia profiles (such as those depicted in Fig. 9), is formalised in Eq. (2).
Recruitment of naive target cells Tn to the infection site (rate RT) is amplified by pro-inflammatory cytokines Pi (grouping IL1β, IL6, IL8 and CCL2) and IL12 acting in synergy. Through phagocytosis (Tn become Tm) or infection (Tn become Ti), naive target cells are activated and become mature cells. Mature non-infected cells (Tm) eventually revert to the naive state (rate γT). This activation loss is amplified by immuno-modulatory cytokines. In addition to natural decay (rate \(\mu _{T}^{\text {nat}}\)), TNFα induces the apoptosis of target cells (rate \(\mu ^{\text {ap}}_{T}\)). The resulting dynamics of naive target cells is formalised in Eq. (3).
Similar equations depict the dynamics of infected and mature non-infected (which can be infected) target cells. They include an extra feature: the cytolysis of infected cells by natural killers (NK) and Cytotoxic T Lymphocytes (CTL).
The virus target cells pertain to the innate response. Mature target cells are involved in the virus phagocytosis and in antigen presentation, which activates the adaptive response. The model also includes another innate cell type, the activated natural killers (NK), which cytolyse infected cells. All these cells synthesise various cytokines.
The first step of the adaptive response is the activation (rates \(\alpha _{E}^{T_{m},T_{i}}\)) of helper T cells by mature antigen-presenting target cells (Tm and Ti). Depending on the cytokine environment, they differentiate into the three main CD4\(^{+}\) T lymphocyte subtypes, which determine the adaptive response orientation: the cellular (Ec, type-1 helper T cells), humoral (Eh, type-2 helper T cells) and regulatory (Er, regulatory and type-17 helper T cells) effectors. The humoral subtype is the default and remains so when cytokines IL4 and IL6 (in Pi) predominate over IL12, IFNγ and TGFβ. The dynamics of the humoral effectors is formalised in Eq. (4).
The equations of the cellular and regulatory effectors are similar, except for the differentiation term: IL12 and IFNγ favour the cellular response, TGFβ the regulatory response.
Once differentiated, these three effectors, together with viral particles, activate plasma cells (B, rate \(\alpha _{B}^{E}\)), which in turn synthesise neutralising antibodies (rate \(\rho ^{B}_{\text {nAb}}\)); see Eq. (5):
$$ \begin{aligned} \dot B &= + \alpha_{B}^{E}\,\textstyle\frac{V}{1+V} \left(E_{h} + E_{c} + E_{r}\right) &\quad \leftarrow &\textsf{activation}\\ &\quad + p_{B}\,B\,{\kappa^{-}(\text{TGF}\beta)}&\quad \leftarrow &\textsf{proliferation}\\ &\quad- \mu_{B}^{\text{nat}}\,B &\quad \leftarrow &\textsf{decay}\\ \end{aligned} $$
Moreover, the cellular effectors (Ec), together with mature target cells, activate CD8\(^{+}\) T cells, also called Cytotoxic T Lymphocytes (CTL, rates \(\alpha _{\text {CTL}}^{T_{m},T_{i}}\)), which induce the cytolysis of infected cells:
$$ {\begin{aligned} \dot{\text{CTL}} &\!=+ \textstyle\sum_{j=m,i}\left(\alpha_{\text{CTL}}^{T_{j}}\,\frac{T_{j}}{1+T_{j}}\right)\,E_{c}&\; \leftarrow &\textsf{activation}\\ &\quad+p_{E}\,\!\text{CTL}\,{\kappa^{-}\!(\text{TGF}\beta)}[1+\kappa^{+}(\text{IL}12)]&\; \leftarrow &\textsf{proliferation}\\ &\quad- \mu_{\text{CTL}}^{\text{nat}}\,\text{CTL}\,[1+\kappa^{+}(\text{TNF}\alpha)]&\; \leftarrow &\textsf{decay}\\ \end{aligned}} $$
Proliferation of all five adaptive effectors (rates pE and pB) is inhibited by TGFβ and amplified by IL12 (except for B). Their natural decay (rates \(\mu _{.}^{\text {nat}}\)) is amplified by TNFα (except for B), which induces their apoptosis. The effectors synthesise various cytokines.
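The displayed plasma cell (Eq. (5)) and CTL equations translate almost term by term into code. In the sketch below, the cytokine regulation factors κ±(∙), defined in the next subsection, are passed in as precomputed numbers, and all argument names simply mirror the symbols of the equations; actual values would come from the fitted parameter sets, not from this sketch.

```python
def plasma_cell_rhs(B, V, E_h, E_c, E_r, alpha_B_E, p_B, mu_B_nat, kappa_minus_TGFb):
    """dB/dt for plasma cells: activation by the adaptive effectors (saturating in V),
    proliferation down-regulated by TGF-beta, and natural decay."""
    activation = alpha_B_E * V / (1.0 + V) * (E_h + E_c + E_r)
    proliferation = p_B * B * kappa_minus_TGFb
    decay = mu_B_nat * B
    return activation + proliferation - decay

def ctl_rhs(CTL, T_m, T_i, E_c, alpha_CTL_Tm, alpha_CTL_Ti, p_E, mu_CTL_nat,
            kappa_minus_TGFb, kappa_plus_IL12, kappa_plus_TNFa):
    """dCTL/dt: activation by mature/infected target cells and cellular effectors,
    proliferation regulated by TGF-beta and IL12, decay amplified by TNF-alpha."""
    activation = (alpha_CTL_Tm * T_m / (1.0 + T_m)
                  + alpha_CTL_Ti * T_i / (1.0 + T_i)) * E_c
    proliferation = p_E * CTL * kappa_minus_TGFb * (1.0 + kappa_plus_IL12)
    decay = mu_CTL_nat * CTL * (1.0 + kappa_plus_TNFa)
    return activation + proliferation - decay
```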
Cytokine regulations
The model accounts for eight cytokines, representing the major cytokine functions: pro-inflammatory (Pi, grouping IL1β, IL6, IL8 and CCL2), antiviral (TNFα, IFNα & IFNγ), immuno-modulatory (IL10 & TGFβ) and immuno-regulatory, the latter being subdivided into pro-cellular (IL12 & IFNγ), pro-humoral (IL4 & IL6) and pro-regulatory (TGFβ) responses.
Cytokines are synthesised by the immune cells. They are involved in the regulation of most infection and immune processes, including the cytokine syntheses. Up-regulations, whether activations (multiply by κ+) or amplifications (multiply by [1+κ+]), and down-regulations (multiply by κ−) depend on the cytokine concentration (Ck). The higher the cytokine concentration, the stronger the effect. However, there is a limited number of receptors, so the effect saturates above a given cytokine concentration. Cytokine effects are hence classically based on the Michaelis–Menten formalism [50–52]:
$$ \kappa^{+}(C_{k}) = \frac{v_{m} \: C_{k}}{k_{m}+C_{k}} \quad \& \quad \kappa^{-}(C_{k}) = \frac{k_{m}}{k_{m}+C_{k}}, $$
where vm denotes the saturation factor and km the half saturation constant. Cytokines may interact: an additive effect of cytokines Ck and Ck′ is represented by κ±(Ck+Ck′), a synergistic effect by κ±(Ck Ck′). To reduce the model complexity, we assumed that the regulation parameters km and vm were the same, whatever the cytokine(s) involved. This simplification was based on our sensitivity analyses, where both parameters were found to exhibit a negligible influence on the viral dynamics (Additional file 5: Figure A5-1).
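In code, the two regulation functions and their combinations are one-liners; the default values of vm and km below are placeholders (the model uses a single shared pair of values, as explained above):

```python
def kappa_plus(c, v_m=1.0, k_m=1.0):
    """Up-regulation (activation) factor: saturating Michaelis-Menten response."""
    return v_m * c / (k_m + c)

def kappa_minus(c, k_m=1.0):
    """Down-regulation (inhibition) factor: decreases with cytokine concentration."""
    return k_m / (k_m + c)

# Additive effect of two cytokines: kappa(Ck + Ck'); synergistic effect: kappa(Ck * Ck')
tnfa, ifng = 0.5, 2.0                     # placeholder concentrations
additive = kappa_plus(tnfa + ifng)
synergistic = kappa_plus(tnfa * ifng)
amplification = 1.0 + kappa_plus(ifng)    # used as a multiplicative [1 + kappa+] term
```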
The observed infection dynamics varies considerably among individuals (Fig. 9). We hypothesised that the variability within and between the uniphasic and biphasic viremia profiles is related to a different balance among immune mechanisms and that it can be captured by our mechanistic model using various parameter sets. In order to identify parameter sets associated with either the uniphasic or the biphasic profile, we fitted the model to the experimental data subsets. Since the model has many parameters, we were faced with a potential identifiability issue for obtaining unique parameter estimates associated with a specific profile. Rather than reducing the model complexity and thus significantly limiting the scope of our approach, we first reduced the number of parameters to estimate and, second, chose an appropriate fitting procedure.
Parameter ranges and selection
There are few experimental or modelling data to inform the parameter values. In previous work [26, 27], we developed a specific procedure to tackle this issue: (i) similar models in the literature provided large ranges for model parameters; (ii) quantitative (for the viremia), semi-quantitative (orders of magnitude, date of peaks, etc. – for immune components) and qualitative (shape – for immune components) data from PRRSv experimental studies were collected to define realistic within-host dynamics; (iii) through an extensive exploration of the parameter space, parameter ranges were refined to obtain realistic dynamics. We hence obtained ranges for all model parameters (Additional file 5: Table A5-4 & Figure A5-2).
To select which parameters to estimate and which to fix, we performed global sensitivity analyses on the whole viral dynamics (Additional file 5). A first sensitivity analysis exploring the influence of (almost) all model parameters showed that these parameters had comparable contributions to the viremia variance, so we could not identify a subset of parameters with a marked influence on the viremia.
Consequently, we based our parameter selection on biological knowledge. Hypotheses linking between-host variability in the infection dynamics to immune mechanisms are numerous [15–18, 25, 53]. In order to remain open to all these hypotheses, we selected 14 parameters that are associated with a wide range of relevant mechanisms: the virus capacity to infect the cell and replicate, the target cell capacity to synthesise antiviral vs. immuno-modulatory cytokines, and the activation of the different arms of the adaptive response. To avoid biasing our results, the ranges of the 14 parameters to estimate were set equally for both profiles.
Fixing the remaining parameters to an intermediate value of their respective ranges, we performed a second sensitivity analysis. As previously, it could not single out parameters with a major impact on the viremia, but the exploration of the parameter subspace showed that the simulations covered more than the variability observed in the viremia data.
We estimated the parameter values associated with each profile by minimising a criterion which quantifies the distance between a model simulation and the data. Our aim was to identify parameter sets that characterise each viremia profile. Rather than reproducing individual viremia data, as classical fitting criteria do, we defined, for each profile, a criterion based on the whole data range. A data-compatible parameter set was then defined as a set that satisfies (minimises) this criterion. We then looked for not one but several data-compatible parameter sets. To do so, we developed a fitting method that resembles Approximate Bayesian Computation (ABC), but is less computationally costly. This allowed us to specify a full range of data-compatible parameter sets.
Fitting criteria. Before calculating the fitting criterion, the simulation profile was determined. A viremia curve was classified as uniphasic if and only if (i) it exhibited a single peak above the detection threshold within the first 42 days of infection and (ii) the viremia was below the detection threshold at day 42; it was classified as biphasic if and only if it exhibited at least two peaks above the detection threshold within the first 42 days of infection.
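This classification rule is straightforward to implement; the sketch below (our own naming and threshold constant) counts local maxima above the detection threshold on a simulated viremia curve sampled daily over 0–42 dpi:

```python
import numpy as np

DETECTION_THRESHOLD = 10.0  # TCID50/ml

def classify_profile(v):
    """Classify a simulated viremia curve sampled daily over 0..42 dpi as
    'uniphasic', 'biphasic' or 'other' (no peak, or a single unresolved peak)."""
    above = v > DETECTION_THRESHOLD
    # A peak is a local maximum of the curve lying above the detection threshold
    peaks = [i for i in range(1, len(v) - 1)
             if above[i] and v[i] >= v[i - 1] and v[i] > v[i + 1]]
    if len(peaks) >= 2:
        return "biphasic"
    if len(peaks) == 1 and not above[-1]:
        return "uniphasic"
    return "other"
```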
If the simulated viremia curve matched the expected profile, the corresponding criterion was computed as follows. Both uniphasic and biphasic criteria were based on the viral indicators (Fig. 8) and the corresponding data ranges (Table 1), rather than the whole viremia dynamics. For each indicator i, the error Δi was defined as the shortest standardised distance between the viral indicator value simulated by the model \(\mathcal {I}^{M}_{i}\) and the corresponding range observed in the data set \(\left [\mathcal {I}^{\min }_{i}, \mathcal {I}^{\max }_{i}\right ]\). The fitting criterion (\(\mathcal {C}\)) was then defined as the sum of squared errors of the relevant viral indicators (n=4 for the uniphasic profile, n=9 for the biphasic profile):
$$ \begin{aligned}\mathcal{C}&=\sum\limits_{i=1}^{n}\Delta_{i}^{2}\\ &\quad \text{with:}\\ \Delta_{i} &= \left\{ \begin{array}{ll} 0 & \text{if }\mathcal{I}^{M}_{i}\in \left[\mathcal{I}^{\min}_{i},\mathcal{I}^{\max}_{i}\right], \\ \frac{\min\left(|\mathcal{I}^{M}_{i}-\mathcal{I}^{\min}_{i}|, |\mathcal{I}^{M}_{i}- \mathcal{I}^{\max}_{i}|\right)}{\left(\mathcal{I}^{\max}_{i}-\mathcal{I}^{\min}_{i}\right)} &\text{else.} \end{array}\right. \end{aligned} $$
Indicator errors were normalised to account for differences in terms of magnitude and ranges. As we aimed for zero-valued errors for all indicators, if some indicator errors had outweighed the others, it could have affected the convergence of the optimisation algorithm. NB: Viral indicators would correspond to the summary statistics in an ABC method. A fitting criterion equal to zero would correspond to an ABC acceptance criterion with: (i) ABC data defined as mean viral indicator values; (ii) ABC tolerance defined as half the viral indicator ranges.
If, in contrast, the simulated viremia curve did not match the expected profile, the corresponding fitting criterion \(\mathcal {C}\) was set to an arbitrarily high value, in order to penalise the corresponding parameter set. If the viremia curve matched a biphasic (resp. uniphasic) profile when a uniphasic (resp. biphasic) was expected, we set \(\mathcal {C}=500\). If the viremia curve exhibited a single viral peak but was unresolved at day 42, we set \(\mathcal {C}=700\). Finally, if the viremia curve did not exhibit any peak (either a steady growth or unsuccessful infection), we set \(\mathcal {C}=1000\).
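Putting the pieces together, the criterion \(\mathcal{C}\) for one simulation can be sketched as follows. The penalty values (500, 700, 1000) and the standardised distance follow the definitions above; the dictionary layout, the helper classify_profile from the previous sketch and all names are ours.

```python
def indicator_error(value, lo, hi):
    """Standardised distance of a simulated indicator to the observed data range [lo, hi]."""
    if lo <= value <= hi:
        return 0.0
    return min(abs(value - lo), abs(value - hi)) / (hi - lo)

def fitting_criterion(v, indicators, data_ranges, expected_profile):
    """Criterion C: sum of squared indicator errors if the simulated profile matches the
    expected one, otherwise a fixed penalty (wrong profile: 500, single unresolved
    peak: 700, no peak at all: 1000)."""
    profile = classify_profile(v)                  # helper from the previous sketch
    if profile != expected_profile:
        if profile in ("uniphasic", "biphasic"):
            return 500.0
        has_peak = any(v[i] > DETECTION_THRESHOLD and v[i] >= v[i - 1] and v[i] > v[i + 1]
                       for i in range(1, len(v) - 1))
        return 700.0 if has_peak else 1000.0
    return sum(indicator_error(indicators[k], *data_ranges[k]) ** 2 for k in data_ranges)
```

Here data_ranges would hold, for each indicator of Fig. 8, the corresponding [min., max.] interval of Table 1 (n = 4 indicators for the uniphasic profile, n = 9 for the biphasic one).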
Implementation and initialisation. The model simulation and model fitting were conducted in Scilab 5.5.3 [54]. The minimisation, i.e. the identification of data-compatible parameter sets resulting in \(\mathcal {C}=0\), was performed using the Adaptive Random Search (ARS) algorithm, for both uniphasic and biphasic profiles independently. ARS is a simple optimisation method, exhibiting good empirical performance: numerous case studies have demonstrated that the algorithm searches efficiently through large and complex search spaces before reaching the perceived global optimum [55], i.e. \(\mathcal {C}=0\) in our case.
In order to thoroughly explore the parameter space, we performed the fitting procedure starting from 625 initial parameter sets that proceeded independently. They were chosen following a fractional factorial design built with the R package planor [56], in order to distribute the algorithm starting points evenly in the parameter space, minimising the computational effort. This method is particularly well suited for high dimensional problems characterised by multiple influential parameters with strong interactions, as was the case in our study (see Additional file 5). 625 corresponds to the number of points required for a resolution IV design with 3 levels per parameter. For each parameter set starting point, the ARS algorithm converged to an optimal parameter set, corresponding to \(\mathcal {C}=0\), within \(10^{5.8}\) iterations on average.
Selection of representative individuals for both viremia profiles. In order to capture the full range of parameter combinations associated with each profile without sampling bias, we generated a representative set of 35 individuals per viremia profile using a clustering method. Indeed, among the 625 individuals of each profile, some were very close, others quite distinct. Consequently, taking into account the full set would have led to over- or under-representation of some individuals. We proceeded similarly but independently for both viremia profiles. We used a k-means clustering method (kmeans function of R package stats), which partitions the 625 parameter sets obtained by the fitting procedure into a given number of clusters. The number of clusters has to be large enough to represent the variability of the parameter sets but small enough to prevent the over-representation of some parameter sets. To set the number of clusters, various heuristics are employed, such as the "elbow rule": since between-class inertia increases with the number of clusters, this rule consists in detecting the number of clusters beyond which adding another cluster does not result in a notable inertia increase. We ran the k-means method for all possible numbers of clusters (1 to 625) and used the "elbow rule" to get a rough idea of the appropriate number of clusters (between 20 and 50). We then selected the smallest number reaching 80% between-class inertia for both profiles, namely 35 clusters. The representative individual of each cluster was chosen as the parameter set closest to the cluster barycentre. Thirty-five individuals out of the 625 represent a sufficiently large number to cover the full range of parameter combinations associated with each profile and to provide sufficient statistical power to detect differences between both profiles.
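The selection of the 35 representative individuals can be sketched as follows. The original analysis used the kmeans function of the R package stats; this Python/scikit-learn version is only illustrative, and param_sets stands for the 625 fitted 14-parameter sets of one profile.

```python
import numpy as np
from sklearn.cluster import KMeans

param_sets = np.random.rand(625, 14)   # placeholder for the 625 fitted 14-parameter sets

# Between-class inertia fraction = 1 - within-class inertia / total inertia
total_inertia = ((param_sets - param_sets.mean(axis=0)) ** 2).sum()

def between_class_fraction(k):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(param_sets)
    return km, 1.0 - km.inertia_ / total_inertia

# Smallest number of clusters reaching 80% between-class inertia, scanned over the
# 20..50 range suggested by the elbow rule
for k in range(20, 51):
    km, frac = between_class_fraction(k)
    if frac >= 0.80:
        break

# Representative individual of each cluster: parameter set closest to the cluster barycentre
representatives = []
for c in range(km.n_clusters):
    members = np.where(km.labels_ == c)[0]
    dists = np.linalg.norm(param_sets[members] - km.cluster_centers_[c], axis=1)
    representatives.append(members[np.argmin(dists)])
```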
Identification of immune mechanisms discriminating between both viremia profiles. In order to identify the key immunological drivers that lead to either uniphasic or biphasic viremia profiles, we compared, for the 35 individuals selected per profile: (i) the dynamics of the immune components represented in the model, over four time periods (0–10, 11–20, 21–31, 32–42 dpi); (ii) the estimated baseline rates of the immune mechanisms; and (iii) the efficacy of key immune mechanisms for the earlier (0 to 20 dpi) and later (21 to 42 dpi) time periods. The efficacy of a particular immune mechanism (e.g. cell infection, viral neutralisation, infected cell cytolysis, etc.) was quantified by the ratio of the total number of viral particles or cells mobilised by the mechanism to the total number of viral particles or cells generated, over the time period considered. As an example, the efficacy of viral neutralisation was defined, from Eq. (2), as the total number of viral particles neutralised, \(\int _{t_1}^{t_2} \mu _{V}^{\text {ad}}\, V(t)\, \text {nAb}(t)\, dt\), over the total number of viral particles created (exposure + replication), \(\int _{t_1}^{t_2} \left[ e_{T}\, T_{i}(t)\, \kappa ^{-}(\text {TNF}\alpha (t)+\text {IFN}\alpha (t)+\text {IFN}\gamma (t)) + E(t) \right] dt\).
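Given simulated trajectories, the efficacy ratios reduce to numerical integration. The sketch below reproduces the viral-neutralisation example with the trapezoidal rule; all arrays are assumed to come from a model run and the variable names are ours.

```python
import numpy as np

def neutralisation_efficacy(t, V, nAb, T_i, reg, E, mu_V_ad, e_T, t1=0.0, t2=20.0):
    """Efficacy of viral neutralisation over [t1, t2]:
    neutralised particles / created particles (replication + exposure).
    `reg` is the precomputed kappa-(TNFa + IFNa + IFNg) trajectory."""
    mask = (t >= t1) & (t <= t2)
    neutralised = np.trapz(mu_V_ad * V[mask] * nAb[mask], t[mask])
    created = np.trapz(e_T * T_i[mask] * reg[mask] + E[mask], t[mask])
    return neutralised / created
```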
Comparisons were performed by visual inspection and standard univariate statistical tests: mean values and standard deviations, permutation tests for the trajectories (R package stamod, with \(10^{4}\) permutations) and Kolmogorov–Smirnov tests for the baseline rates and mechanism efficacies (R package Matching).
Validation of candidate immune mechanisms
Viremia profiles are the result of many immune mechanisms that interact and are regulated by complex feedback loops. However, pharmaceutical interventions can often only target a single mechanism. We therefore tested whether boosting or inhibiting specific key mechanisms could result in a viremia profile inversion, e.g. from uniphasic to biphasic or vice versa.
For this purpose, we carried out additional simulations in which we boosted (resp. inhibited) the values of the parameters driving the efficacy of each mechanism by six gradual levels from ×10 (resp. ×\(10^{-1}\)) to ×\(10^{3}\) (resp. ×\(10^{-3}\)). We performed the simulations on the 35 representative individuals per viremia profile. We focused on mechanisms which directly affect the viral dynamics and are assumed to play a critical role in the infection dynamics [17, 57, 58]: infection (driven by infection rate βT), naive target cell apoptosis by TNFα (apoptosis rate \(\mu _{T}^{\text {ap}}\)), infected cell cytolysis by natural killers and cytotoxic T lymphocytes (cytolysis rates \(\mu _{T}^{\text {in}}\) and \(\mu _{T}^{\text {ad}}\)), and viral particle neutralisation by antibodies (neutralisation rate \(\mu _{V}^{\text {ad}}\)). When a mechanism exhibited a higher (lower) efficacy for the viremia profile of the individual, we decreased (boosted) its efficacy. Then we counted the percentage of individuals that had a profile inversion for at least one of the six levels tested. A simulation of a uniphasic (resp. biphasic) individual qualified for profile inversion if the corresponding viral titer could be classified as biphasic (resp. uniphasic) according to the definition of the viremia profile given in the "Fitting criteria" paragraph above.
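The profile-inversion test can be sketched as a scan over multiplicative factors applied to one parameter at a time. The exact six levels used are not spelled out here, so the three factors per direction below are illustrative; simulate_viremia stands for a full run of the model for a given parameter set, and classify_profile is the helper from the fitting sketch above.

```python
import numpy as np

BOOST_LEVELS = [10.0, 1e2, 1e3]
INHIBIT_LEVELS = [1e-1, 1e-2, 1e-3]

def profile_inverted(params, target_param, levels, expected_inverse, simulate_viremia):
    """Return True if scaling `target_param` by at least one factor in `levels`
    turns the simulated viremia curve into the opposite profile."""
    for factor in levels:
        modified = dict(params)
        modified[target_param] *= factor
        v = simulate_viremia(modified)          # placeholder for a full model run
        if classify_profile(v) == expected_inverse:
            return True
    return False

# Example: percentage of uniphasic individuals inverted to biphasic when the
# neutralisation rate is inhibited (hypothetical list of fitted parameter sets)
# inverted = [profile_inverted(p, "mu_V_ad", INHIBIT_LEVELS, "biphasic", simulate_viremia)
#             for p in uniphasic_individuals]
# print(100 * np.mean(inverted))
```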
Role of host genetics on rebound
Genetic studies of the PHGC data identified a single nucleotide polymorphism (WUR10000125) on chromosome 4, denoted WUR SNP, that was found to confer significant differences in cumulative viremia (i.e. area under the viremia curve) within the first 21 days post infection [21, 22]. So we tested whether the WUR SNP was also associated with rebound. For this purpose, we carried out a logistic regression analysis on our data subset. We categorised pigs into two resistance genotypes, high and low, according to whether or not they carried the beneficial allele at the WUR SNP. We accounted for systematic effects in this analysis [19, 21]. The result of this analysis is presented only in the Discussion.
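A minimal version of this logistic regression could look as follows with statsmodels, assuming a data frame with one row per pig, a binary rebound outcome and a binary indicator for carrying the beneficial WUR allele (column names and the placeholder data are ours; the actual analysis also included systematic effects, which would simply be added to the formula):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per pig: rebound = 1 if the pig showed a viremia rebound,
# favorable_wur = 1 if it carries the beneficial allele at the WUR SNP
df = pd.DataFrame({"rebound": np.random.binomial(1, 0.4, 240),
                   "favorable_wur": np.random.binomial(1, 0.5, 240)})  # placeholder data

model = smf.logit("rebound ~ favorable_wur", data=df).fit()
odds_ratios = np.exp(model.params)        # odds ratio for the genotype effect
conf_int = np.exp(model.conf_int())       # 95% confidence interval on the odds-ratio scale
print(odds_ratios["favorable_wur"], conf_int.loc["favorable_wur"].values)
```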
PHGC: http://www.animalgenome.org/lunney/index.php
ABC:
Approximate Bayesian computation
APC:
Antigen presenting cell
ARS:
Adaptive random search
CTL:
Cytotoxic T lymphocyte
dpi:
Days post-infection
nAb:
Neutralising antibody
NK:
Natural killer
PHGC:
PRRS host genetic consortium
PRRSv:
Porcine reproductive and respiratory syndrome virus
RNA:
Ribonucleic acid
TCID50/ml:
50% Tissue culture infective dose
WUR SNP:
Single nucleotide polymorphism WUR10000125
Spear JB, Benson CA, Portage JC, Paul DA, Landay AL, Kessler HA. Rapid rebound of serum human immunodeficiency virus antigen after discontinuing zidovudine therapy. J Infect Dis. 1988; 158(5):1132–3.
Pawelek KA, Huynh GT, Quinlivan M, Cullinane A, Rong L, Perelson AS. Modeling within-host dynamics of influenza virus infection including immune responses. PLoS Comput Biol. 2012; 8(6):1002588. https://doi.org/10.1371/journal.pcbi.1002588.
Reiner G, Willems H, Pesch S, Ohlinger VF. Variation in resistance to the porcine reproductive and respiratory syndrome virus (PRRSV) in Pietrain and Miniature pigs. J Anim Breed Genet. 2010; 127:100–6. https://doi.org/10.1111/j.1439-0388.2009.00818.x.
Qiu X, Wong G, Fernando L, Audet J, Bello A, Strong J, Alimonti JB, Kobinger GP. mAbs and Ad-vectored IFN-α therapy rescue Ebola-infected nonhuman primates when administered after the detection of viremia and symptoms. Sci Transl Med. 2013; 5(207):207ra143. https://doi.org/10.1126/scitranslmed.3006605.
Murray JM, Stancevic O, Lütgehetmann M, Wursthorn K, Petersen J, Dandri M. Variability in long-term hepatitis B virus dynamics under antiviral therapy. J Theor Biol. 2016; 391:74–80. https://doi.org/10.1016/j.jtbi.2015.12.005.
Cao P, Yan AW, Heffernan JM, Petrie S, Moss RG, Carolan LA, Guarnaccia TA, Kelso A, Barr IG, McVernon J, et al. Innate immunity and the inter-exposure interval determine the dynamics of secondary influenza virus infection and explain observed viral hierarchies. PLoS Comput Biol. 2015; 11(8):1004334. https://doi.org/10.1371/journal.pcbi.1004334.
Smith AM. A critical combination of bacterial dose and virus-induced alveolar macrophage depletion leads to pneumococcal infections during influenza. J Immunol. 2016; 196(1 Supplement):78–16. https://doi.org/10.1016/j.coi.2015.02.002.
Frank SA. Within-host spatial dynamics of viruses and defective interfering particles. J Theor Biol. 2000; 206(2):279–90. https://doi.org/10.1006/jtbi.2000.2120.
Rowland RRR, Lunney J, Dekkers J. Control of porcine reproductive and respiratory syndrome (PRRS) through genetic improvements in disease resistance and tolerance. Front Genet. 2012; 3:260. https://doi.org/10.3389/fgene.2012.00260.
Dobrovolny HM, Reddy MB, Kamal MA, Rayner CR, Beauchemin CAA. Assessing mathematical models of influenza infections using features of the immune response. PloS ONE. 2013; 8(2):57088. https://doi.org/10.1371/journal.pone.0057088.
Abella G, Pena RN, Nogareda C, Armengol R, Vidal A, Moradell L, Tarancon V, Novell E, Estany J, Fraile L. A WUR SNP is associated with European Porcine Reproductive and Respiratory Virus Syndrome resistance and growth performance in pigs. Res Vet Sci. 2016; 104:117–22. https://doi.org/10.1016/j.rvsc.2015.12.014.
Hess AS, Islam Z, Hess MK, Rowland RR, Lunney JK, Doeschl-Wilson A, Plastow GS, Dekkers JC. Comparison of host genetic factors influencing pig response to infection with two North American isolates of porcine reproductive and respiratory syndrome virus. Genet Sel Evol. 2016; 48(1):43. https://doi.org/10.1186/s12711-016-0222-0.
Zimmerman J, Benfield DA, Murtaugh MP, Osorio F, Stevenson GW, Torremorell M. Porcine reproductive and respiratory syndrome virus (porcine arterivirus) In: Straw BE, Zimmerman JJ, D'Allaire S, Taylor DL, editors. Diseases of Swine, Ninth edn.Oxford, UK: Blackwell: 2006. p. 387–418. Chap. 24.
Darwich L, Díaz I, Mateu E. Certainties, doubts and hypotheses in porcine reproductive and respiratory syndrome virus immunobiology. Virus Res. 2010; 154(1-2):123–32. https://doi.org/10.1016/j.virusres.2010.07.017.
Kimman TG, Cornelissen LA, Moormann RJ, Rebel JMJ, Stockhofe-Zurwieden N. Challenges for porcine reproductive and respiratory syndrome virus (PRRSV) vaccinology. Vaccine. 2009; 27(28):3704–18. https://doi.org/10.1016/j.vaccine.2009.04.022.
Lunney JK, Chen H. Genetic control of host resistance to porcine reproductive and respiratory syndrome virus (PRRSV) infection. Virus Res. 2010; 154(1–2):161–9. https://doi.org/10.1016/j.virusres.2010.08.004.
Murtaugh MP, Genzow M. Immunological solutions for treatment and prevention of porcine reproductive and respiratory syndrome (PRRS). Vaccine. 2011; 29(46):8192–204. https://doi.org/10.1016/j.vaccine.2011.09.013.
Nauwynck HJ, Van Gorp H, Vanhee M, Karniychuk U, Geldhof M, Cao A, Verbeeck M, Van Breedam W. Micro-dissecting the pathogenesis and immune response of PRRSV infection paves the way for more efficient PRRSV vaccines. Transbound Emerg Dis. 2012; 59:50–4. https://doi.org/10.1111/j.1865-1682.2011.01292.x.
Islam ZU, Bishop SC, Savill NJ, Rowland RR, Lunney JK, Trible B, Doeschl-Wilson AB. Quantitative analysis of porcine reproductive and respiratory syndrome (PRRS) viremia profiles from experimental infection: a statistical modelling approach. PLoS ONE. 2013; 8(12):83567. https://doi.org/10.1371/journal.pone.0083567.
Renson P, Rose N, Dimna M, Mahé S, Keranflec'h A, Paboeuf F, Belloc C, Potier M-F, Bourry O. Dynamic changes in bronchoalveolar macrophages and cytokines during infection of pigs with a highly or low pathogenic genotype 1 PRRSV strain. Vet Res. 2017; 48(1):15. https://doi.org/10.1186/s13567-017-0420-y.
Boddicker N, Waide EH, Rowland RRR, Lunney JK, Garrick DJ, Reecy JM, Dekkers JCM. Evidence for a major QTL associated with host response to porcine reproductive and respiratory syndrome virus challenge. J Anim Sci. 2012; 90(6):1733–46. https://doi.org/10.2527/jas.2011-4464.
Boddicker NJ, Bjorkquist A, Rowland RR, Lunney JK, Reecy JM, Dekkers JC. Genome-wide association and genomic prediction for host response to porcine reproductive and respiratory syndrome virus infection. Genet Sel Evol. 2014; 46(1):1. https://doi.org/10.1186/1297-9686-46-18.
Chen N, Trible BR, Kerrigan MA, Tian K, Rowland RR. Orf5 of porcine reproductive and respiratory syndrome virus (PRRSV) is a target of diversifying selection as infection progresses from acute infection to virus rebound. Infect Genet Evol. 2016; 40:167–75. https://doi.org/10.1016/j.meegid.2016.03.002.
Murtaugh MP, Xiao Z, Zuckermann F. Immunological responses of swine to porcine reproductive and respiratory syndrome virus infection. Viral Immunol. 2002; 15(4):533–47. https://doi.org/10.1089/088282402320914485.
Thanawongnuwech R, Suradhat S. Taming PRRSV: Revisiting the control strategies and vaccine design. Virus Res. 2010; 154(1–2):133–40. https://doi.org/10.1016/j.virusres.2010.09.003.
Go N. Modelling the immune response to the porcine reproductive and respiratory syndrome virus. PhD thesis, December 2014. Speciality: Life Sciences. https://hal.inria.fr/tel-01100983/.
Go N, Bidot C, Belloc C, Touzeau S. Integrative model of the immune response to a pulmonary macrophage infection: what determines the infection duration?PLoS ONE. 2014; 9(9):107818. https://doi.org/10.1371/journal.pone.0107818.
Baccam P, Beauchemin C, Macken CA, Hayden FG, Perelson AS. Kinetics of influenza A virus infection in humans. J Virol. 2006; 80(15):7590–9. https://doi.org/10.1128/JVI.01623-05.
Handel A, Longini IM, Antia R. Towards a quantitative understanding of the within-host dynamics of influenza A infections. J R Soc Interface. 2010; 7(42):35–47. https://doi.org/10.1098/rsif.2009.0067.
Saenz RA, Quinlivan M, Elton D, MacRae S, Blunden AS, Mumford JA, Daly JM, Digard P, Cullinane A, Grenfell BT, McCauley JW, Wood JLN, Gog JR. Dynamics of influenza virus infection and pathology. J Virol. 2010; 84(8):3974–83. https://doi.org/10.1128/JVI.02078-09.
Doeschl-Wilson A, Galina-Pantoja L. Using mathematical models to gain insight into host-pathogen interaction in mammals: porcine reproductive and respiratory syndrome In: Barton AW, editor. Host–pathogen Interactions: Genetics, Immunology and Physiology. Immunology and Immune System Disorders. Chap. 4. New York, USA: Nova Science Publishers: 2010. p. 109–31. https://doi.org/10.1017/S1751731107691848.
Go N, Belloc C, Bidot C, Touzeau S. Why, when and how should exposure be considered at the within-host scale? A modelling contribution to PRRSv infection. Math Med Biol J IMA. 2018. https://doi.org/10.1093/imammb/dqy005.
Lee HY, Topham DJ, Park SY, Hollenbaugh J, Treanor J, Mosmann TR, Jin X, Ward BM, Miao H, Holden-Wiltse J, Perelson AS, Zand M, Wu H. Simulation and prediction of the adaptive immune response to influenza A virus infection. J Virol. 2009; 83(14):7151–65. https://doi.org/10.1128/JVI.00098-09.
Doeschl-Wilson A, Wilson A, Nielsen J, Nauwynck H, Archibald A, Ait-Ali T. Combining laboratory and mathematical models to infer mechanisms underlying kinetic changes in macrophage susceptibility to an RNA virus. BMC Syst Biol. 2016; 10(1):101. https://doi.org/10.1186/s12918-016-0345-5.
Murtaugh MP. PRRS immunology: What are we missing? In: 35th Annual Meeting of the American Association of Swine Veterinarians. Des Moines: American Association of Swine Veterinarians: 2004. p. 359–67.
Gómez-Laguna J, Salguero FJ, Pallarés FJ, Carrasco L. Immunopathogenesis of porcine reproductive and respiratory syndrome in the respiratory tract of pigs. Vet J. 2013; 195(2):148–55. https://doi.org/10.1016/j.tvjl.2012.11.012.
Mateu E, Díaz I. The challenge of PRRS immunology. Vet J. 2008; 177:345–51. https://doi.org/10.1016/j.tvjl.2007.05.022.
Yoo D, Song C, Sun Y, Du Y, Kim O, Liu H-C. Modulation of host cell responses and evasion strategies for porcine reproductive and respiratory syndrome virus. Virus Res. 2010; 154(1–2):48–60. https://doi.org/10.1016/j.virusres.2010.07.019.
Labarque G, van Gucht S, Nauwynck H, van Reeth K, Pensaert M. Apoptosis in the lungs of pigs infected with porcine reproductive and respiratory syndrome virus and associations with the production of apoptogenic cytokines. Vet Res. 2003; 34:249–60. https://doi.org/10.1051/vetres:2003001.
Miller LC, Fox JM. Apoptosis and porcine reproductive and respiratory syndrome virus. Vet Immunol Immunop. 2004; 102(3):131–42. https://doi.org/10.1016/j.vetimm.2004.09.004.
Beyer J, Fichtner D, Schirrmeir H, Polster U, Weiland E, Wege H. Porcine reproductive and respiratory syndrome virus (PRRSV): kinetics of infection in lymphatic organs and lung. J Vet Med B Infect Dis Vet Public Health. 2000; 47(1):9–25. https://doi.org/10.1046/j.1439-0450.2000.00305.x.
Fenton A, Lello J, Bonsall M. Pathogen responses to host immunity: the impact of time delays and memory on the evolution of virulence. Proc R Soc Lond B Biol Sci. 2006; 273(1597):2083–90. https://doi.org/10.1098/rspb.2006.3552.
Raue A, Kreutz C, Maiwald T, Bachmann J, Schilling M, Klingmüller U, Timmer J. Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics. 2009; 25(15):1923–9. https://doi.org/10.1093/bioinformatics/btp358.
Charerntantanakul W. Adjuvants for porcine reproductive and respiratory syndrome virus vaccines. Vet Immunol Immunop. 2009; 129(1):1–13. https://doi.org/10.1016/j.vetimm.2008.12.018.
Koltes JE, Fritz-Waters E, Eisley CJ, Choi I, Bao H, Kommadath A, Serão NV, Boddicker NJ, Abrams SM, Schroyen M, et al. Identification of a putative quantitative trait nucleotide in guanylate binding protein 5 for host response to PRRS virus infection. BMC Genomics. 2015; 16(1):412. https://doi.org/10.1186/s12864-015-1635-9.
Burkard C, Lillico SG, Reid E, Jackson B, Mileham AJ, Ait-Ali T, Whitelaw CBA, Archibald AL. Precision engineering for PRRSV resistance in pigs: Macrophages from genome edited pigs lacking CD163 SRCR5 domain are fully resistant to both PRRSV genotypes while maintaining biological function. PLoS Pathog. 2017; 13(2):1006206. https://doi.org/10.1371/journal.ppat.1006206.
Whitworth KM, Prather RS. Gene editing as applied to prevention of reproductive porcine reproductive and respiratory syndrome. Mol Reprod Dev. 2017; 89(9):926–33. https://doi.org/10.1002/mrd.22811.
Lunney JK, Steibel JP, Reecy JM, Fritz E, Rothschild MF, Kerrigan M, Trible B, Rowland RR. Probing genetic control of swine responses to PRRSV infection: Current progress of the prrs host genetics consortium. BMC Proc. 2011; 5(4):30. https://doi.org/10.1186/1753-6561-5-S4-S30.
Truong HM, Lu Z, Kutish GF, Galeota J, Osorio FA, Pattnaik AK. A highly pathogenic porcine reproductive and respiratory syndrome virus generated from an infectious cDNA clone retains the in vivo virulence and transmissibility properties of the parental virus. Virology. 2004; 325(2):308–19. https://doi.org/10.1016/j.virol.2004.04.046.
Gammack D, Ganguli S, Marino S, Segovia-Juarez J, Kirschner D. Understanding the immune response in tuberculosis using different mathematical models and biological scales. Multiscale Model Sim. 2005; 3(2):312–45. https://doi.org/10.1137/040603127.
Marino S, Myers A, Flynn JL, Kirschner DE. TNF and IL-10 are major factors in modulation of the phagocytic cell environment in lung and lymph node in tuberculosis: A next-generation two-compartmental model. J Theor Biol. 2010; 265(4):586–98. https://doi.org/10.1016/j.jtbi.2010.05.012.
Wigginton JE, Kirschner D. A model to predict cell-mediated immune regulatory mechanisms during human infection with Mycobacterium tuberculosis. J Immunol. 2001; 166:1951–67. https://doi.org/10.4049/jimmunol.166.3.1951.
Darwich L, Gimeno M, Sibila M, Díaz I, de la Torre E, Dotti S, Kuzemtseva L, Martin M, Pujols J, Mateu E. Genetic and immunobiological diversities of porcine reproductive and respiratory syndrome genotype I strains. Vet Microbiol. 2011; 150(1–2):49–62. https://doi.org/10.1016/j.vetmic.2011.01.008.
Scilab. Scilab Enterprises. Open source software for numerical computation. Version 5.5.2; 2015. http://www.scilab.org/.
Walter É, Pronzato L. Optimization - general methods. In: Identification of parametric models: from experimental data. Communications and Control Engineering. London: Springer: 1997. p. 216–9.
Monod H, Bouvier A, Kobilinsky A. planor: generation of regular factorial designs. In: Automatic generation of regular factorial designs, including fractional designs, orthogonal block designs, row-column designs and split-plots. R package version 1.3-7: 2017. https://CRAN.R-project.org/package=planor.
Xiao Z, Batista L, Dee S, Halbur P, Murtaugh MP. The level of virus-specific T-cell and macrophage recruitment in porcine reproductive and respiratory syndrome virus infection in pigs is independent of virus load. J Virol. 2004; 78:5923–33. https://doi.org/10.1128/JVI.78.11.5923-5933.2004.
Diaz I, Darwich L, Pappaterra G, Pujols J, Mateu E. Immune responses of pigs after experimental infection with a European strain of porcine reproductive and respiratory syndrome virus. J Gen Virol. 2005; 86(7):1943–51. https://doi.org/10.1099/vir.0.80959-0.
The authors are grateful to Inria Sophia Antipolis – Méditerranée "NEF" computation cluster for providing resources and support; to Chris Pooley for his constructive comments on our work; to the PRRS Host Genetics Consortium and particularly Joan K. Lunney for her input on PRRSv data and immunology.
Financial support (design of the study, analysis, and interpretation of simulations and in writing the manuscript) was provided by the French National Institute for Agricultural Research (INRA) and the French National Research Agency (ANR), program "Investments for the Future", project ANR- 10-BINF-07 (MIHMES). Furthermore, ADW's contribution was funded by the BBSRC Institute Strategic Programme Grant ISPG 2, and by the EU Horizon 2020 project SAPHIR (Project No. 633184).
Smoothed experimental data used for this study are provided as Additional file 4. Raw data from simulation study, as well as the scilab code of our model and fitting of the model are available upon request. Please contact the corresponding author Natacha Go.
Suzanne Touzeau and Andrea Doeschl-Wilson contributed equally.
BIOEPAR, INRA, Oniris, Route de Gachet, CS 40706, Nantes, France
Natacha Go & Catherine Belloc
BIOCORE, Inria, INRA, CNRS, UPMC Univ Paris 06, Université Côte d'Azur, 2004 route des Lucioles, BP 93, Sophia Antipolis, France
Natacha Go & Suzanne Touzeau
Division of Genetics and Genomics, The Roslin Institute, Easter Bush, Midlothian, UK
Natacha Go, Zeenath Islam & Andrea Doeschl-Wilson
ISA, INRA, CNRS, Université Côte d'Azur, 400 route des Chappes, BP 167, Sophia Antipolis, France
Suzanne Touzeau
Natacha Go
Zeenath Islam
Catherine Belloc
Andrea Doeschl-Wilson
Conceptualisation: NG, ADW, ST, CB Data Curation: NG, ADW, ST, ZI Formal Analysis: NG, AW, ST Funding Acquisition: CB, ST, ADW Investigation: NG Methodology: NG, ST, ADW Project Administration: nothing Resources: ADW, ST, ZI Software: NG Supervision: ST, ADW Validation: nothing Visualisation: NG Writing Original Draft Preparation: NG, ST, ADW Writing Review & Editing: NG, ST, ADW, CB, ZI. All authors read and approved the final manuscript.
Correspondence to Natacha Go.
Not applicable – the study only used smoothed data from experimental pig infections and simulated data.
Experimental data we used in this study are smoothed data published in [19], from raw experimental data initially provided by the PHGC [9, 48]: http://www.animalgenome.org/lunney/index.php. The study was approved by the Kansas State University Institutional Animal Care and Use Committee (IACUC) under registration number 3000.
Additional file 1
Within-host dynamics. Evolution over infection time of the 19 state variables of the model: free viral particles, target cells (APC), natural killers, adaptive effectors including cytotoxic T lymphocytes, plasma cells, neutralising antibodies and cytokines. Comparison between uniphasic and biphasic viremia profiles. (PDF 124 kb)
Additional file 2
Estimated parameters. Values of the 14 parameters linked to between-host variability: host–virus interactions, viral replication, activation of the adaptive response, cytokine syntheses by activated target cells; for the 35 representative individuals selected for the uniphasic and biphasic viremia profiles. Comparison between uniphasic and biphasic viremia profiles. (PDF 87 kb)
Additional file 3
Mechanisms resulting in viremia profile inversion. Percentage of individuals, among the 35 representative individuals selected for the uniphasic and biphasic viremia profiles, that had a profile inversion when boosting or inhibiting (depending on the profile) either mechanism (one at a time): infection, apoptosis by TNFα, cytolysis by cytotoxic T lymphocytes, neutralisation by antibodies. Cytolysis by natural killers never resulted in a profile inversion (not shown). (PDF 87 kb)
Additional file 4
Smoothed viremia data. Parameters of the Wood's functions fitted to the PHGC data [19], for the selected individuals exhibiting uniphasic or biphasic profiles. (PDF 121 kb)
Additional file 5
Model description & Sensitivity analyses. The file provides a complete description of the dynamic model representing the within-host dynamics induced by a primary PRRSv infection in a naive pig. It specifies the modelling assumptions and includes all model equations. The file also describes the global sensitivity analyses performed to assess the impact of model parameters on the viral dynamics. Corresponding aims, methods and results are presented. (PDF 710 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Go, N., Touzeau, S., Islam, Z. et al. How to prevent viremia rebound? Evidence from a PRRSv data-supported model of immune response. BMC Syst Biol 13, 15 (2019). https://doi.org/10.1186/s12918-018-0666-7
Immunological model
ABC-like optimisation method
Rebounder viremia profile
PRRSv
Systems physiology, pharmacology and medicine
Digit (unit)
The digit or finger is an ancient and obsolete non-SI unit of measurement of length. It was originally based on the breadth of a human finger.[1] It was a fundamental unit of length in the Ancient Egyptian, Mesopotamian, Hebrew, Ancient Greek and Roman systems of measurement.
In astronomy a digit is one twelfth of the diameter of the sun or the moon.[2]
History
Ancient Egypt
The digit, also called a finger or fingerbreadth, is a unit of measurement originally based on the breadth of a human finger. In Ancient Egypt it was the basic unit of subdivision of the cubit.[1]
On surviving Ancient Egyptian cubit-rods, the royal cubit is divided into seven palms of four digits or fingers each.[3] The royal cubit measured approximately 525 mm,[4] so the length of the ancient Egyptian digit was about 19 mm.
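A quick check of the arithmetic, using the approximate royal cubit length quoted above:

$$ \frac{525\ \text{mm}}{28\ \text{digits}} \approx 18.75\ \text{mm per digit} \approx 19\ \text{mm}. $$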
Ancient Egyptian units of length[5]

Name          Egyptian name   Equivalent Egyptian values   Metric equivalent
Royal cubit   meh niswt       7 palms or 28 digits         525 mm
Fist                          6 digits                     108 mm
Hand                          5 digits                     94 mm
Palm          shesep          4 digits                     75 mm
Digit         djeba           1/4 palm                     19 mm
Mesopotamia
In the classical Akkadian Empire system instituted in about 2250 BC during the reign of Naram-Sin, the finger was one-thirtieth of a cubit length. The cubit was equivalent to approximately 497 mm, so the finger was equal to about 17 mm. This basic length unit was used in architecture and field division.
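As a quick consistency check of the quoted values:

$$ \frac{497\ \text{mm}}{30} \approx 16.6\ \text{mm} \approx 17\ \text{mm}. $$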
Mesopotamian units of length

Unit     Ratio   Metric equivalent   Sumerian   Akkadian   Cuneiform
grain    1/180   2.8 mm              še         uţţatu     𒊺
finger   1/30    17 mm               šu-si      ubānu      𒋗𒋛
foot     2/3     331 mm              šu-du3-a   šīzu       𒋗𒆕𒀀
cubit    1       497 mm              kuš3       ammatu     𒌑
Britain
A digit (lat. digitus, "finger"), when used as a unit of length, is usually a sixteenth of a foot or 3/4" (1.905 cm for the international inch).[6] The width of an adult human male finger tip is indeed about 2 centimetres. In English this unit has mostly fallen out of use, as have other units based on the human arm: finger (7/6 digit), palm (4 digits), hand (16/3 digits), shaftment (8 digits), span (12 digits), cubit (24 digits) and ell (60 digits).
It is in general equal to the foot-nail, although the term nail can also refer to 1/16 of a yard and to other units.
Astronomy
In astronomy a digit is, or was until recently, one twelfth of the diameter of the sun or the moon.[2][7] This is found in the Moralia of Plutarch, XII:23,[8] but the definition as exactly one twelfth of the diameter may be due to Ptolemy. Sosigenes of Alexandria had observed in the 1st century AD that on a dioptra, a disc with a diameter of 11 or 12 digits (of length) was needed to cover the moon.[9]
The unit was used in Arab or Islamic astronomical works such as those of Ṣadr al‐Sharīʿa al‐Thānī (d. 1346/7),[10] where it is called iṣba' (Arabic: إصبعا), digit or finger.[11]
The astronomical digit was in use in Britain for centuries. Heath, writing in 1760, explains that 12 digits equal the diameter of the sun in eclipse, but that up to 23 may be needed for the Earth's shadow as it eclipses the moon, those over 12 representing the extent to which the Earth's shadow is larger than the Moon.[12]
Alcoholic Beverages
A 'finger' of an alcoholic beverage is colloquially referred to as a 'digit'.
See also
• Finger (unit)
• Finger tip unit
• Cubit
• Anthropic units
References
1. Hosch, William L. (ed.) (2010) The Britannica Guide to Numbers and Measurement New York, NY: Britannica Educational Publications, 1st edition. ISBN 978-1-61530-108-9, p.203
2. Chisholm, Hugh, ed. (1911). "Digit" . Encyclopædia Britannica. Vol. 8 (11th ed.). Cambridge University Press. p. 268.
3. Selin, Helaine, ed. (1997). Encyclopaedia of the History of Science, Technology and Medicine in non-Western Cultures. Dordrecht: Kluwer. ISBN 978-0-7923-4066-9.
4. Lepsius, Richard (1865). Die altaegyptische Elle und ihre Eintheilung (in German). Berlin: Dümmler.
5. Clagett, Marshall (1999). Ancient Egyptian Science, A Source Book. Volume 3: Ancient Egyptian Mathematics. Philadelphia: American Philosophical Society. ISBN 978-0-87169-232-0.
6. Ronald Edward Zupko (1985). A dictionary of weights and measures for the British Isles: the Middle Ages to the twentieth century. American Philosophical Society. pp. 109–10. ISBN 978-0-87169-168-2. Retrieved 15 January 2012.
7. Macdonald, A.M. (ed.) (1972) Chambers Twentieth Century Dictionary Edinburgh: W. & R. Chambers ISBN 0-550-10206-X, "digit"
8. Plutarchus Chaeronensis, Frank Cole Babbitt (trans.) (1957) Plutarch's Moralia: In fifteen volumes London: William Heinemann, Cambridge, Mass.: Harvard University Press, Volume XII p.144
9. Neugebauer, Otto (1975) A History of Ancient Mathematical Astronomy Berlin: Springer, ISBN 978-0-387-06995-1 Volume 2, p.658
10. Hockey, Thomas et al. (eds.) (2007) The Biographical Encyclopedia of Astronomers, Springer Reference New York: Springer pp. 1002–1003
11. 'Ubayd Allāh ibn Mas'ūd Ṣadr al-S̆arīaẗ al-Aṣġar al-Maḥbūbī, Ahmad S. Dallal (1995) An Islamic response to Greek astronomy: kitāb Ta'dīl hay'at al-aflāk of Ṣadr al-Sharī'a (in Arabic and English) Leiden, New York: E.J. Brill, ISBN 978-90-04-09968-5 p.212
12. Heath, Robert (1760). Astronomia accurata; or ... subservient to the three principal Subjects. London: Published by the author. p. ix.
|
Wikipedia
|
Is it really ~648.69 km/s delta-v to "land" on the surface of the Sun?
According to the following diagram, it would require ~648.69 km/s to do all of the intermediate transfers that would land you at or around the sun's surface at perigee. Is this a real number? How would they have calculated this number? Are the sun's mass and other quantities known well enough for this to be absolutely accurate? It seems like a massive number; am I reading this wrong?
I partially understand the vis-viva equations, but not in terms of the sun. It seems odd to use it around the body you're currently orbiting. Do I have to think of it in terms of a different frame of reference? I was thinking this is the delta-v for a non-circularized orbit with its lowest point at the sun's surface and its highest at Low Sun Orbit (that's my thought on the 440 number)... but I'm not sure.
delta-v the-sun
Magic Octopus Urn
$\begingroup$ Seems plausible. The delta v for landing is mostly cancellation of orbital speed, and solar orbit speed at Earth's altitude is 30km/s — from low solar orbit it would be much higher. WP gives the sun's mass to four decimal places; I don't know if it's better known than that. $\endgroup$ – Russell Borogove Aug 8 '19 at 15:23
$\begingroup$ If you had a spacecraft that could tolerate the heat of the surface, you should be able to aerobrake, so that chart should have a red arrow, reducing the delta-v requirement by as much aerobraking as you can tolerate. $\endgroup$ – Joshua Aug 8 '19 at 23:22
$\begingroup$ @Joshua Aerobraking changes how much delta-V you need from your rocket, but not how much dV you actually need to change orbits. It's just that part of the dV can come from atmospheric drag instead of your horribly inefficient rocket engine. The problem is that this is not a nice number that works for every spacecraft, unlike the total dV - it depends on the aerodynamics and size of your craft, the mission parameters etc.; the blue number is actually the minimal dV you need, regardless of how you achieve that dV. $\endgroup$ – Luaan Aug 9 '19 at 6:05
$\begingroup$ @Joshua: "Aerobraking in the photosphere" is one of the more hair-raising concepts I've heard of in space travel, but I suppose it'd technically be possible. We'll have to get right on that if we need a harder challenge after we colonize Venus. $\endgroup$ – Michael Seifert Aug 9 '19 at 17:21
$\begingroup$ "It seems like a massive number; am I reading this wrong?" The sun is massive - about 99% of the mass of the Solar System. $\endgroup$ – Jacob Krall Aug 9 '19 at 22:46
are the sun's mass and other quantities known well enough for this to be absolutely accurate?
Well, the key to this is the vis-viva equation in your question. It's not actually important for us to know the mass of the Sun precisely, so long as we know the product $GM$ (another answer makes mention of this).
And that product is, of course, the same for any body orbiting the Sun, the key insight behind Kepler's third law. So if we have an accurate knowledge of the period and semimajor axis of any body orbiting the Sun (including Earth), we have an accurate picture of $GM$. Well, we know Earth's orbital period to better than a millisecond per year (10^-12 fractional accuracy), and Earth's semimajor axis to better than 4 km in 1 AU (10^-7 fractional accuracy), so we should know that GM figure to about 1 part in 10 million (all the various exponents conveniently cancel out). Seems pretty good to me.
In fact, $G$ is the one that we have the least handle on (we know it to a little better than 4 digits, according to CODATA), and that is what limits our estimate of the mass of the Sun to the same precision (in fact, Wikipedia's info box says the sun is $1.9885 \times 10^{30}$ kg and then links to an archived older version of the same page that says $1.9891 \times 10^{30}$ kg). but A) that's not a terrible estimate anyway, and B) as already pointed out, it's immaterial because we know $GM$ better than we know $G$ or $M$.
hobbs
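To make this concrete, here is a minimal sketch of the calculation (my own illustration; the orbital values are standard rounded figures for Earth, not quoted from the answer above):

```python
import math

# Kepler's third law for a body orbiting the Sun: GM = 4*pi^2 * a^3 / T^2.
# Strictly this gives G(M_sun + M_earth), but Earth's contribution is only ~3 ppm.
a = 1.495978707e11        # m, Earth's semi-major axis (1 au)
T = 365.25636 * 86400.0   # s, sidereal year

GM_sun = 4.0 * math.pi ** 2 * a ** 3 / T ** 2
print(f"GM_sun = {GM_sun:.4e} m^3/s^2")   # ~1.327e+20, matching the value used below
```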
The Vis-viva equation is
$$ v = \sqrt{ GM \left(\frac{2}{r} - \frac{1}{a} \right) }, $$
The $GM$ product for the Sun is 1.327E+20 m^3/s^2.
If 1 AU is 150E+09 meters, then when you are in a circular ($r=a$) Heliocentric orbit at Earth escape/capture point your velocity is 29.7 km/s.
If you then change to an ellipse with aphelion still at 1 AU and perihelion at the surface of the Sun, then $a = (1\ \mathrm{AU} + 695700\ \mathrm{km})/2 = 75.35$ million km and your velocity at aphelion ($r = 1\ \mathrm{AU}$) is now only 2.86 km/s, and at perihelion (grazing the Sun) it's 616.2 km/s. You've had to lose a $\Delta v$ of 26.9 km/s to get into that orbit, and you'll have to lose another 616.2 km/s to stop on the Sun the next time you graze it.
That's 643.1 km/s, plus the 12.6 km/s to get from Earth's surface to Earth escape/capture, so the total I get is 655.7 km/s.
That's pretty close to your number. I've treated the Earth's orbit as a circle at 150 million km exactly, which it isn't.
Steven Martin
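For readers who want to reproduce these numbers, here is a minimal sketch using the same rounded inputs as the answer (1 AU = 150 million km, solar radius 695,700 km, GM = 1.327e20 m^3/s^2); the 12.6 km/s figure for getting from Earth's surface to escape is simply taken from the answer.

```python
import math

GM_SUN = 1.327e20          # m^3/s^2, heliocentric gravitational parameter
AU = 1.5e11                # m, Earth's orbital radius, treated as circular
R_SUN = 6.957e8            # m, solar radius
EARTH_SURFACE_TO_ESCAPE = 12.6e3  # m/s, figure quoted in the answer

def vis_viva(r, a):
    """Orbital speed at radius r on a heliocentric orbit with semi-major axis a."""
    return math.sqrt(GM_SUN * (2.0 / r - 1.0 / a))

v_circular_1au = vis_viva(AU, AU)             # ~29.7 km/s
a_transfer = (AU + R_SUN) / 2.0               # aphelion at 1 AU, perihelion grazing the Sun
v_aphelion = vis_viva(AU, a_transfer)         # ~2.86 km/s
v_perihelion = vis_viva(R_SUN, a_transfer)    # ~616 km/s

burn_at_1au = v_circular_1au - v_aphelion     # ~26.9 km/s
total = burn_at_1au + v_perihelion + EARTH_SURFACE_TO_ESCAPE
print(f"burn at 1 AU: {burn_at_1au / 1e3:.1f} km/s")
print(f"stop at perihelion: {v_perihelion / 1e3:.1f} km/s")
print(f"total from Earth's surface: {total / 1e3:.1f} km/s")   # ~655.7 km/s
```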
$\begingroup$ Wow, when explained like that it's much easier to understand the math behind those charts. I was definitely overthinking the application of that equation... Thanks for expanding on the actual calculations. $\endgroup$ – Magic Octopus Urn Aug 8 '19 at 18:13
$\begingroup$ @MagicOctopusUrn: Of course, you could eliminate over 90% of the $\Delta v$ requirement by aerobraking on the Sun. Surviving that maneuver is left as an engineering exercise, but then again, if you're going to land on the Sun you presumably have some way of dealing with it, anyway. $\endgroup$ – Ilmari Karonen Aug 10 '19 at 0:49
$\begingroup$ @IlmariKaronen delta-v is just a measure of absolute value of acceleration, independent of how it is produced. You aren't eliminating it, you are just naming your methods. $\endgroup$ – uhoh Aug 10 '19 at 1:04
It's a "real number", a delta-v chart isn't really concerned with the realism of various missions. It's quite simply a table of velocity changes, how you would go about achieving these velocity changes is outside its scope.
As to how to calculate it, the parts of the journey in space can be calculated like any other part of the diagram, using the patched conic approximation to split it up into two body systems, and from then the Vis-viva equation (written on the chart). The other lading/ascension values on the chart appears to consider atmospheric drag, but they skipped it for the Sun since any realistic drag model for a spacecraft there doesn't exist. It's just the speed you would crash into the Sun with from low orbit. (or more accurately, the way you have described it in your question, as a small elliptical transfer). Using the Vis-viva equation for the body you are currently orbiting isn't odd at all, it's the normal use case.
Are the relevant quantities for the Sun known well enough to give a reliable number? Yes and no. The Sun's mass is known very accurately due to accurate measurements of the planets orbiting it, but the other relevant number, the radius is more so-so. The Sun doesn't really have a solid surface, so the surface radius is more a question of definition than accuracy of measurements.
Lastly, it seems like a massive number, can it be right? Going from the surface of a body, and then navigating some in space, you would expect spending about the same delta-v as the escape velocity of the body. Given the Sun's escape velocity of 620km/s, this seems correct.
Hohmannfan
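The escape-velocity sanity check at the end of this answer is a one-liner; a quick sketch (solar radius of 695,700 km assumed):

```python
import math

GM_SUN = 1.327e20   # m^3/s^2
R_SUN = 6.957e8     # m

v_escape = math.sqrt(2.0 * GM_SUN / R_SUN)
print(f"escape velocity at the solar surface: {v_escape / 1e3:.0f} km/s")  # ~618 km/s
```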
$\begingroup$ Just want to clarify, that the 440 delta-v is not for a "circular orbit at the surface of the sun" (I know that doesn't make sense) but an orbit at which Apogee is LSO and Perigee is Sun Surface? However, the 178 delta-v is a transfer from Earth orbit to a circular Low Sun Orbit both probably approximated with 0 eccentricity? +1 anyway, for the links and the factoids. $\endgroup$ – Magic Octopus Urn Aug 8 '19 at 15:52
$\begingroup$ 1. Yes, that's correct. I can edit that in to better address that part of the question. 2. Not quite. The node at 178 distance is the Earth-LSO transfer, at which point you are already in orbit around the Sun in an ellipse touching LSO and Earth's orbit. $\endgroup$ – Hohmannfan Aug 8 '19 at 15:54
$\begingroup$ Ah, so literally none of these are transfers to circular orbits, just the delta-v for the first burn (assumed instantaneous) at the apogee (being LEO) to lower from LEO to LSO on only one side of the orbit. Thanks! $\endgroup$ – Magic Octopus Urn Aug 8 '19 at 18:11
$\begingroup$ @MagicOctopusUrn Some of the numbers in the diagram are to or from circular orbits. The numbers are for the conic element of the flight plan. Example: Earth<->Moon. 9.4km/s from ground to 250x250km orbit, 2.44+0.68 for TLI, 0.14 to capture into a highly elliptical orbit around the moon, 2.94 to go from elliptical into 400x400km circular, and then 1.73 to the surface. On the way back you need 1.73km surface -> 400x400km circular, then 0.68+0.14 to get back on a trajectory that intersects the earths atmosphere, and the other numbers are void due to aerobraking. $\endgroup$ – Polygnome Aug 9 '19 at 8:20
|
CommonCrawl
|
Triangle $ABC$, $ADE$, and $EFG$ are all equilateral. Points $D$ and $G$ are midpoints of $\overline{AC}$ and $\overline{AE}$, respectively. If $AB=4$, what is the perimeter of figure $ABCDEFG$? [asy]
/* AMC8 2000 #15 Problem */
draw((0,0)--(4,0)--(5,2)--(5.5,1)--(4.5,1));
draw((0,0)--(2,4)--(4,0));
draw((3,2)--(5,2));
label("$A$", (3.7,0), SE);
label("$B$", (0,0), SW);
label("$C$", (2,4), N);
label("$D$", (2.8,2), NE);
label("$E$", (4.8,2), NE);
label("$F$", (5.3,1.05), SE);
label("$G$", (4.3, 1.05), SE);
[/asy]
Since $D$ is the midpoint of $\overline{AC}$ and $G$ is the midpoint of $\overline{AE}$, the equilateral triangles give $AB=BC=4$, $CD=DE=2$, and $EF=FG=GA=1$. Therefore \begin{align*}
AB+BC+CD+DE+EF+FG+GA&=4+4+2+2+1+1+1\\
&=\boxed{15}.
\end{align*}
|
Math Dataset
|
Inventiones mathematicae
Every classifiable simple C*-algebra has a Cartan subalgebra
Xin Li
First Online: 16 August 2019
We construct Cartan subalgebras in all classifiable stably finite C*-algebras. Together with known constructions of Cartan subalgebras in all UCT Kirchberg algebras, this shows that every classifiable simple C*-algebra has a Cartan subalgebra.
Mathematics Subject Classification
Primary 46L05, 46L35; Secondary 22A22
Classification of C*-algebras has seen tremendous advances recently. In the unital case, the classification of unital separable simple nuclear \({\mathcal {Z}}\)-stable C*-algebras satisfying the UCT is by now complete. This is the culmination of work by many mathematicians. The reader may consult [12, 20, 24, 34, 44] and the references therein. In the stably projectionless case, classification results are being developed (see [13, 14, 15, 18, 19]). It is expected that—once the stably projectionless case is settled—the final result will classify all separable simple nuclear \({\mathcal {Z}}\)-stable C*-algebras satisfying the UCT by their Elliott invariants. This class of C*-algebras is what we refer to as "classifiable C*-algebras".
To complete these classification results, it is important to construct concrete models realizing all possible Elliott invariants by classifiable C*-algebras. Such models have been constructed—in the greatest possible generality—in [11] (see also [43] which covers special cases). In the stably finite unital case, the reader may also find such range results in [20], where the construction follows the ideas in [11] (with slight modifications, so that the models belong to the special class considered in [20]). In the stably projectionless case, models have been constructed in a slightly different way in [19] (again to belong to the special class of algebras considered) under the additional assumption of a trivial pairing between K-theory and traces.
Recently, the notion of Cartan subalgebras in C*-algebras [25, 36] has attracted attention, due to connections to topological dynamics [26, 27, 28] and the UCT question [3, 4]. In particular the reformulation of the UCT question in [3, 4] raises the following natural question (see [29, Question 5.9], [42, Question 16] and [5, Problems 1 and 2]):
Question 1.1
Which classifiable C*-algebras have Cartan subalgebras?
By [25, 36], we can equally well ask for groupoid models for classifiable C*-algebras. In the purely infinite case, groupoid models and hence Cartan subalgebras have been constructed in [41] (see also [29, § 5]). For special classes of stably finite unital C*-algebras, groupoid models have been constructed in [8, 35] using topological dynamical systems. Using a new approach, the goal of this paper is to answer Question 1.1 by constructing Cartan subalgebras in all the C*-algebra models constructed in [11, 19, 20], covering all classifiable stably finite C*-algebras, in particular in all classifiable unital C*-algebras. Generally speaking, Cartan subalgebras allow us to introduce ideas from geometry and dynamical systems to the study of C*-algebras. More concretely, in view of [3, 4], we expect that our answer to Question 1.1 will lead to progress on the UCT question.
The following two theorems are the main results of this paper. The reader may consult [25, 36] for the definition of twisted groupoids and their relation to Cartan subalgebras, and [38, § 2.2], [32, § 8.4], [18, 19, 20] for the precise definition of the Elliott invariant.
Theorem 1.2
(unital case) Given
a weakly unperforated, simple scaled ordered countable abelian group \((G_0,G_0^+,u)\),
a non-empty metrizable Choquet simplex T,
a surjective continuous affine map \(r: \, T \rightarrow S(G_0)\),
a countable abelian group \(G_1\),
there exists a twisted groupoid \((G,\Sigma )\) such that
G is a principal étale second countable locally compact Hausdorff groupoid,
\(C^*_r(G,\Sigma )\) is a simple unital C*-algebra which can be described as the inductive limit of subhomogeneous C*-algebras whose spectra have dimension at most 3,
the Elliott invariant of \(C^*_r(G,\Sigma )\) is given by
$$\begin{aligned}&(K_0(C^*_r(G,\Sigma )), K_0(C^*_r(G,\Sigma ))^+, [1_{C^*_r(G,\Sigma )}], T(C^*_r(G,\Sigma )), r_{C^*_r(G,\Sigma )},\\&K_1(C^*_r(G,\Sigma ))) \cong (G_0,G_0^+,u,T,r,G_1). \end{aligned}$$
Theorem 1.3
(stably projectionless case) Given
countable abelian groups \(G_0\) and \(G_1\),
a non-empty metrizable Choquet simplex T,
a homomorphism \({\rho }: \, G_0 \rightarrow \mathrm{Aff}(T)\) which is weakly unperforated in the sense that for all \(g \in G_0\), there is \(\tau \in T\) with \(\rho (g)(\tau ) = 0\),
there exists a twisted groupoid \((G,\Sigma )\) such that
G is a principal étale second countable locally compact Hausdorff groupoid,
\(C^*_r(G,\Sigma )\) is a simple stably projectionless C*-algebra with continuous scale in the sense of [18, 19, 30, 31] which can be described as the inductive limit of subhomogeneous C*-algebras whose spectra have dimension at most 3,
the Elliott invariant of \(C^*_r(G,\Sigma )\) is given by
$$\begin{aligned}&(K_0(C^*_r(G,\Sigma )), K_0(C^*_r(G,\Sigma ))^+, T(C^*_r(G,\Sigma )), \rho _{C^*_r(G,\Sigma )},\\&K_1(C^*_r(G,\Sigma ))) \cong (G_0,\left\{ 0 \right\} ,T,\rho ,G_1). \end{aligned}$$
The condition on \(\rho \) means that the pairing between K-theory and traces is weakly unperforated, in the sense of [11]. It has been shown in [14, § A.1] that this condition of weak unperforation is necessary in the classifiable setting (i.e., it follows from finite nuclear dimension, or \({\mathcal {Z}}\)-stability).
It is worth pointing out that in the main theorems, the twisted groupoids are constructed explicitly in such a way that the inductive limit structure with subhomogeneous building blocks will become visible at the groupoid level.
Remark 1.4
The original building blocks in [11] have spectra with dimension at most two. The reason three-dimensional spectra are needed in this paper is that it is not clear how to realize all possible connecting maps at the level of \(K_1\) by Cartan-preserving homomorphisms using the building blocks in [11]. Therefore, the building blocks have to be modified (see Sect. 3). Roughly speaking, the idea is to realize all possible connecting maps in \(K_1\) at the level of topological spaces. This however requires three-dimensional spectra because "nice" topological spaces (say CW-complexes) of dimension two or lower have torsion-free \(K^1\) (because cohomology is torsion-free in all odd degrees for these spaces). The dimension can be reduced to two if \(K_1\) is torsion-free (see Corollary 1.8 and Remark 3.9).
In particular, together with the classification results in [12, 20, 24, 34, 44], the groupoid models in [41], and [3, Theorem 3.1], we obtain the following
Corollary 1.5
A unital separable simple C*-algebra with finite nuclear dimension has a Cartan subalgebra if and only if it satisfies the UCT.
The only reason we restrict to the unital case here is that classification in the stably projectionless case has not been completed yet.
The constructions of the twisted groupoids in Theorems 1.2 and 1.3 yield the following direct consequences:
Corollary 1.6
In the situation of Theorem 1.2, suppose that in addition to \((G_0, G_0^+, u)\), T, r and \(G_1\), we are given a topological cone \(\tilde{T}\) with base T and a lower semicontinuous affine map \(\tilde{\gamma }: \, \tilde{T} \rightarrow [0,\infty ]\). Then there exists a twisted groupoid \((\tilde{G},\tilde{\Sigma })\) such that
\(\tilde{G}\) is a principal étale second countable locally compact Hausdorff groupoid,
\(C^*_r(\tilde{G},\tilde{\Sigma })\) is a non-unital hereditary sub-C*-algebra of \(C^*_r(G,\Sigma ) \otimes {\mathcal {K}}\),
the Elliott invariant of \(C^*_r(\tilde{G},\tilde{\Sigma })\) is given by
$$\begin{aligned}&(K_0(C^*_r(\tilde{G},\tilde{\Sigma })), K_0(C^*_r(\tilde{G},\tilde{\Sigma }))^+, \tilde{T}(C^*_r(G,\Sigma )), \Sigma _{C^*_r(\tilde{G},\tilde{\Sigma })}, r_{C^*_r(\tilde{G},\tilde{\Sigma })},\\&K_1(C^*_r(\tilde{G},\tilde{\Sigma }))) \cong (G_0,G_0^+,\tilde{T},\tilde{\gamma },r,G_1). \end{aligned}$$
Corollary 1.7
In the situation of Theorem 1.3, suppose that in addition to \(G_0\), \(G_1\), T and \(\rho \), we are given a topological cone \(\tilde{T}\) with base T and a lower semicontinuous affine map \(\tilde{\gamma }: \, \tilde{T} \rightarrow [0,\infty ]\). Then there exists a twisted groupoid \((\tilde{G},\tilde{\Sigma })\) such that
\(\tilde{G}\) is a principal étale second countable locally compact Hausdorff groupoid,
\(C^*_r(\tilde{G},\tilde{\Sigma })\) is a hereditary sub-C*-algebra of \(C^*_r(G,\Sigma ) \otimes {\mathcal {K}}\),
the Elliott invariant of \(C^*_r(\tilde{G},\tilde{\Sigma })\) is given by
$$\begin{aligned}&(K_0(C^*_r(\tilde{G},\tilde{\Sigma })), K_0(C^*_r(\tilde{G},\tilde{\Sigma }))^+, \tilde{T}(C^*_r(\tilde{G},\tilde{\Sigma })), \Sigma _{C^*_r(\tilde{G},\tilde{\Sigma })}, \rho _{C^*_r(\tilde{G},\tilde{\Sigma })},\\&K_1(C^*_r(\tilde{G},\tilde{\Sigma }))) \cong (G_0,\left\{ 0 \right\} ,\tilde{T},\tilde{\gamma },\rho ,G_1). \end{aligned}$$
Note that all the groupoids in Theorems 1.2, 1.3 and Corollaries 1.6, 1.7 are necessarily minimal and amenable. Theorem 1.2 and Corollary 1.6, together with [41], imply that every classifiable C*-algebra which is not stably projectionless has a Cartan subalgebra. Once the classification of stably projectionless C*-algebras is completed, Theorem 1.3 and Corollary 1.7 will imply that every classifiable stably projectionless C*-algebra has a Cartan subalgebra. Actually, using \({\mathcal {Z}}\)-stability, we see that all of the above-mentioned classifiable C*-algebras have infinitely many non-isomorphic Cartan subalgebras (compare [29, Proposition 5.1]). Moreover, the constructions in this paper show that in every classifiable stably finite C*-algebra, we can even find C*-diagonals (and even infinitely many non-isomorphic ones).
Moreover, more can be said about the twist, and also about the dimension of the spectra of our Cartan subalgebras.
Corollary 1.8
The twisted groupoids \((G,\Sigma )\) constructed in the proofs of Theorems 1.2 and 1.3 have the following additional properties:
(i) If \(G_0\) is torsion-free, then the twist \(\Sigma \) is trivial, i.e., \(\Sigma = {\mathbb {T}}\times G\).
(ii) If \(G_1\) has torsion, then \(C^*_r(G,\Sigma )\) is an inductive limit of subhomogeneous C*-algebras whose spectra are three-dimensional, and \(\mathrm{dim}\,(G^{(0)}) = 3\).
(iii) If \(G_1\) is torsion-free and \(G_0\) has torsion, \(C^*_r(G,\Sigma )\) is an inductive limit of subhomogeneous C*-algebras whose spectra are two-dimensional, and \(\mathrm{dim}\,(G^{(0)}) = 2\).
(iv) If both \(G_0\) and \(G_1\) are torsion-free with \(G_1 \ncong \left\{ 0 \right\} \), then \(C^*_r(G,\Sigma )\) is an inductive limit of subhomogeneous C*-algebras whose spectra are one-dimensional, and \(\mathrm{dim}\,(G^{(0)}) = 1\).
(v) If \(G_0\) is torsion-free and \(G_1 \cong \left\{ 0 \right\} \), then \(C^*_r(G,\Sigma )\) is an inductive limit of one-dimensional non-commutative finite CW-complexes, with \(\mathrm{dim}\,(G^{(0)}) \le 1\) in Theorem 1.2 and \(\mathrm{dim}\,(G^{(0)}) = 1\) in Theorem 1.3.
In particular, Corollary 1.8 implies the following:
Corollary 1.9
The Jiang–Su algebra \({\mathcal {Z}}\), the Razak–Jacelon algebra \({\mathcal {W}}\) and the stably projectionless version \({\mathcal {Z}}_0\) of the Jiang–Su algebra of [19, Definition 7.1] have C*-diagonals with one-dimensional spectra. The corresponding twisted groupoids \((G,\Sigma )\) can be chosen so that \(\Sigma \) is trivial, i.e., \(\Sigma = {\mathbb {T}}\times G\).
Concrete groupoid models for \({\mathcal {Z}}\), \({\mathcal {W}}\) and \({\mathcal {Z}}_0\) are described in Sect. 8. It is worth pointing out that a groupoid model has been constructed for \({\mathcal {Z}}\) in [8] using a different construction (but the precise dimension of the unit space has not been determined in [8]). Moreover, G. Szabó and S. Vaes independently found groupoid models for \({\mathcal {W}}\), again using constructions different from ours. Furthermore, independently from [4] and the present paper, similar tools to the ones in [4, § 3] were developed in [2], which give rise to groupoid models for \({\mathcal {Z}}\) and \({\mathcal {W}}\) as well as other examples.
The key tool for all the results in this paper is an improved version of [4, Theorem 3.6], which allows us to construct Cartan subalgebras in inductive limit C*-algebras. The C*-algebraic formulation reads as follows.
Theorem 1.10
Let \((A_n,B_n)\) be Cartan pairs with normalizers \(N_n \mathrel {:=}N_{A_n}(B_n)\) and faithful conditional expectations \(P_n: \, A_n \twoheadrightarrow B_n\). Let \(\varphi _n: \, A_n \rightarrow A_{n+1}\) be injective *-homomorphisms with \(\varphi _n(B_n) \subseteq B_{n+1}\), \(\varphi _n(N_n) \subseteq N_{n+1}\) and \(P_{n+1} \circ \varphi _n = \varphi _n \circ P_n\) for all n. Then \(\varinjlim \left\{ B_n;\varphi _n \right\} \) is a Cartan subalgebra of \(\varinjlim \left\{ A_n;\varphi _n \right\} \).
If all \(B_n\) are C*-diagonals, then \(\varinjlim \left\{ B_n;\varphi _n \right\} \) is a C*-diagonal of \(\varinjlim \left\{ A_n;\varphi _n \right\} \).
A special case of this theorem is proved in [6].
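For orientation, here is a standard and well-known special case, included only as an illustration: take \(A_n = M_{2^n}\), \(B_n = D_{2^n}\) the diagonal matrices, \(P_n\) the conditional expectation onto the diagonal, and
$$\begin{aligned} \varphi _n: \, M_{2^n} \rightarrow M_{2^{n+1}}, \quad a \mapsto \begin{pmatrix} a &{} \\ &{} a \end{pmatrix}. \end{aligned}$$
Each \(\varphi _n\) maps diagonal matrices to diagonal matrices, maps normalizers (for instance matrix units) to normalizers, and satisfies \(P_{n+1} \circ \varphi _n = \varphi _n \circ P_n\). Theorem 1.10 then recovers the classical fact that \(\varinjlim \left\{ D_{2^n};\varphi _n \right\} \) is a C*-diagonal in the CAR algebra \(M_{2^{\infty }} = \varinjlim \left\{ M_{2^n};\varphi _n \right\} \).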
Actually, in addition to Theorem 1.10, much more is accomplished: Groupoid models are developed for *-homomorphisms such as \(\varphi _n\), and the twisted groupoid corresponding to \(\left( \varinjlim \left\{ A_n;\varphi _n \right\} , \varinjlim \left\{ B_n;\varphi _n \right\} \right) \) as in Theorem 1.10 is described explicitly. These results (in Sect. 5) might be of independent interest.
Applications of these explicit descriptions of groupoid models (for homomorphisms and Cartan pairs) and Theorem 1.10 include a unified approach to Theorems 1.2, 1.3, and explicit constructions of the desired twisted groupoids. The strategy is as follows: C*-algebras with prescribed Elliott invariant have been constructed in [11] (see also [20, § 13] for the unital case). These C*-algebras have all the desired properties as in Theorems 1.2 and 1.3 and are constructed as inductive limits of subhomogeneous C*-algebras. However, the connecting maps in [11] and [20, § 13] do not preserve the canonical Cartan subalgebras in these building blocks in general. Therefore, a careful choice or modification of the building blocks and connecting maps in the constructions in [11, 20] is necessary in order to allow for an application of Theorem 1.10. The modification explained in Remark 4.1 is particularly important. Actually, a more general result is established in Sect. 4.2, where a class of inductive limits of subhomogeneous C*-algebras is identified, which encompasses all the C*-algebras in Theorems 1.2, 1.3 and Corollaries 1.6, 1.7, where we can apply Theorem 1.10.
I am grateful to the organizers Selçuk Barlak, Wojciech Szymański and Wilhelm Winter of the Oberwolfach Mini-Workshop "MASAs and Automorphisms of C*-Algebras" for inviting me, and for the discussions in Oberwolfach with Selçuk Barlak which eventually led to this paper. I also thank Selçuk Barlak and Gábor Szabó for helpful comments on earlier drafts. Moreover, I would like to thank the referee for very helpful comments which led to an improved version of Theorem 1.3 (previous versions of this theorem only covered classifiable stably projectionless C*-algebras with trivial pairing between K-theory and traces).
2 The constructions of Elliott and Gong–Lin–Niu
Let us briefly recall the constructions in [11] (see also [16] for simplifications and further explanations) and [20, § 13].
2.1 The unital case
Let us describe the construction in [20, § 13], which is based on [11] (with slight modifications). Given \((G_0,G_0^+,u,T,r,G_1)\) as in Theorem 1.2, write \(G = G_0\), \(K = G_1\), and let \(\rho : \, G \rightarrow \mathrm{Aff}(T)\) be the dual map of r. Choose a dense subgroup \(G' \subseteq \mathrm{Aff}(T)\). Set \(H \mathrel {:=}G \oplus G'\),
$$\begin{aligned} H^+ \mathrel {:=}\left\{ (0,0) \right\} \cup \left\{ (g,f) \in G \oplus G' \text {: }\rho (g)(\tau ) + f(\tau ) > 0 \ \forall \ \tau \in T \right\} , \end{aligned}$$
and view u in G as an element of H. Then \((H,H^+,u)\) becomes a simple ordered group, inducing the structure of a dimension group on \(H / \mathrm{Tor}(H)\). Now construct a commutative diagram
\(H_n\) is a finitely generated abelian group with \(H_n = \bigoplus _i H_n^i\), where for one distinguished index \(\varvec{i}\), \(H_n^{\varvec{i}} = {\mathbb {Z}}\oplus \mathrm{Tor}(H_n)\), and for all other indices, \(H_n^i = {\mathbb {Z}}\);
with \((H_n^{\varvec{i}})^+ \mathrel {:=}\left\{ (0,0) \right\} \cup ({\mathbb {Z}}_{>0} \oplus \mathrm{Tor}(H_n))\), \((H_n^i)^+ \mathrel {:=}{\mathbb {Z}}_{\ge 0}\) for all \(i \ne \varvec{i}\), \(H_n^+ \mathrel {:=}\bigoplus _i (H_n^i)^+ \subseteq H_n^{\varvec{i}} \oplus \bigoplus _{i \ne \varvec{i}} H_n^i = H_n\) and \(u_n = (([n,\varvec{i}],\tau _n),([n,i])_{i \ne \varvec{i}}) \in H_n^+\), we have
$$\begin{aligned} \varinjlim \left\{ (H_n,H_n^+,u_n); \gamma _n \right\} \cong (H,H^+,u); \end{aligned}$$
with \(G_n \mathrel {:=}(\gamma _n^{\infty })^{-1}(G)\), where \(\gamma _n^{\infty }: \, H_n \rightarrow H\) is the map provided by (1), and \(G_n^+ \mathrel {:=}G_n \cap H_n^+\), we have \(u_n \in G_n \subseteq H_n\), and (1) induces \(\varinjlim \left\{ (G_n,G_n^+,u_n); \gamma _n \right\} \cong (G,G^+,u)\);
the vertical maps are the canonical ones.
Let \({\hat{\gamma }}_n: \, H_n / \mathrm{Tor}(H_n) \mathrel {=:}{\hat{H}}_n = \bigoplus _i {\hat{H}}_n^i \rightarrow \bigoplus _j {\hat{H}}_{n+1}^j = {\hat{H}}_{n+1} \mathrel {:=}H_{n+1} / \mathrm{Tor}(H_{n+1})\) be the homomorphism induced by \(\gamma _n\), where \({\hat{H}}_n^i = {\mathbb {Z}}= {\hat{H}}_{n+1}^j\) for all i and j. For fixed n, the map \({\hat{\gamma }} = {\hat{\gamma }}_n\) is given by a matrix \(({\hat{\gamma }}_{ji})\), where we can always assume that \({\hat{\gamma }}_{ji} \in {\mathbb {Z}}_{>0}\) (considered as a map \({\hat{H}}_n^i = {\mathbb {Z}}\rightarrow {\mathbb {Z}}= {\hat{H}}_{n+1}^j\)). Then \(\gamma _n = {\hat{\gamma }} + \tau + t\) for homomorphisms \(\tau : \, \mathrm{Tor}(H_n) \rightarrow \mathrm{Tor}(H_{n+1})\) and \(t: \, {\hat{H}}_n \rightarrow \mathrm{Tor}(H_{n+1})\). Here we think of \({\hat{H}}_n\) as a subgroup (actually a direct summand) of \(H_n\). As explained in [20, § 6], given a positive constant \(\Gamma _n\) depending on n, we can always arrange that
$$\begin{aligned} ({\hat{\gamma }}_n)_{ji} \ge \Gamma _n \ \mathrm{for} \ \mathrm{all} \ i \ \mathrm{and} \ j. \end{aligned}$$
Also, let \(K_n\) be finitely generated abelian groups and \(\chi _n: \, K_n \rightarrow K_{n+1}\) homomorphisms such that \(K \cong \varinjlim \left\{ K_n; \chi _n \right\} \).
Let \(F_n = \bigoplus _i F_n^i\) be C*-algebras, where \(F_n^{\varvec{i}}\) is a homogeneous C*-algebra of the form \(F_n^{\varvec{i}} = P_n^{\varvec{i}} M_{\infty }(C(Z_n^{\varvec{i}})) P_n^{\varvec{i}}\) for a connected compact space \(Z_n^{\varvec{i}}\) with base point \(\theta _n^{\varvec{i}}\) and a projection \(P_n^{\varvec{i}} \in M_{\infty }(C(Z_n^{\varvec{i}}))\), while for all other indices \(i \ne \varvec{i}\), \(F_n^i\) is a matrix algebra, \(F_n^i = M_{[n,i]}\). We require that \((K_0(F_n^{\varvec{i}}), K_0(F_n^{\varvec{i}})^+,[1_{F_n^{\varvec{i}}}]) \cong (H_n^{\varvec{i}},(H_n^{{\mathbf {i}}})^+,([n,\varvec{i}],\tau _n))\) and \(K_1(F_n^{\varvec{i}}) \cong K_n\), so that \((K_0(F_n), K_0(F_n)^+,[1_{F_n}], K_1(F_n)) \cong (H_n,H_n^+,u_n,K_n)\).
Let \(\psi _n\) be a unital homomorphism \(F_n \rightarrow F_{n+1}\) which induces \(\gamma _n\) in \(K_0\) and \(\chi _n\) in \(K_1\). We write \(F_n = P_n C(Z_n) P_n\) where \(Z_n = Z_n^{\varvec{i}} \amalg \coprod _{i \ne \varvec{i}} \{ \theta _n^i \}\), and \(P_n = (P_n^{\varvec{i}},(1_{[n,i]})_{i \ne \varvec{i}}) \in M_{\infty }(C(Z_n^{\varvec{i}})) \oplus \bigoplus _{i \ne \varvec{i}} M_{[n,i]}(C(\{ \theta _n^i \}))\). Thus evaluation in \(\theta _n^i\) induces a quotient map \(\pi _n: \, F_n \rightarrow {\hat{F}}_n \mathrel {:=}\bigoplus _i {\hat{F}}_n^i\), where \({\hat{F}}_n^i = M_{[n,i]}\). We require that \(\psi _n\) induce homomorphisms \({\hat{\psi }}_n: \, {\hat{F}}_n \rightarrow {\hat{F}}_{n+1}\) so that we obtain a commutative diagram
which induces in \(K_0\)
where the vertical arrows are the canonical projections. As \(\mathrm{Tor}(H_n) \subseteq G_n\), \(H_n / G_n\) is torsion-free, and there is a canonical projection \(H_n / \mathrm{Tor}(H_n) \rightarrow H_n / G_n\). Now let \(E_n \mathrel {:=}\bigoplus _p E_n^p\), \(E_n^p = M_{\left\{ n,p \right\} }\), so that \(K_0(E_n) \cong H_n / G_n\), and for fixed n, let \(\beta _0, \, \beta _1: \, {\hat{F}}_n \rightarrow E_n\) be unital homomorphisms which yield the commutative diagram
We can assume \(\beta _0 \oplus \beta _1: \, {\hat{F}}_n \rightarrow E_n \oplus E_n\) to be injective, because only the difference \((\beta _0)_* - (\beta _1)_*\) matters.
Now define
$$\begin{aligned}&A_n \mathrel {:=}\left\{ (f,a) \in C([0,1],E_n) \oplus F_n \text {: }f(t) = \beta _t(\pi _n(a)) \ \mathrm{for} \ t = 0,1 \right\} ,\\&{\hat{A}}_n \mathrel {:=}\left\{ (f,{\hat{a}}) \in C([0,1],E_n) \oplus {\hat{F}}_n \text {: }f(t) = \beta _t({\hat{a}}) \ \mathrm{for} \ t = 0,1 \right\} . \end{aligned}$$
As \(\beta _0 \oplus \beta _1\) is injective, we view \({\hat{A}}_n\) as a subalgebra of \(C([0,1],E_n)\) via \((f,{\hat{a}}) \mapsto f\).
Choose for each n a unital homomorphism \({\hat{\varphi }}_n: \, {\hat{A}}_n \rightarrow {\hat{A}}_{n+1}\) such that the composition with the map \(C([0,1],E_{n+1}) \twoheadrightarrow C([0,1],E_{n+1}^q)\) induced by the canonical projection \(E_{n+1} \twoheadrightarrow E_{n+1}^q\),
$$\begin{aligned} {\hat{A}}_n \overset{{\hat{\varphi }}_n}{\longrightarrow } {\hat{A}}_{n+1} \hookrightarrow C([0,1],E_{n+1}) \twoheadrightarrow C([0,1],E_{n+1}^q), \end{aligned}$$
is of the form
$$\begin{aligned} C([0,1],E_n) \supseteq {\hat{A}}_n \ni f \mapsto u^* \begin{pmatrix} V(f) &{} \\ &{} D(f) \end{pmatrix} u, \end{aligned}$$
where u is a continuous path of unitaries \([0,1] \rightarrow U(E_{n+1}^q)\),
$$\begin{aligned} V(f) = \begin{pmatrix} \pi _1(f) &{} &{} \\ &{} \pi _2(f) &{} \\ &{} &{} \ddots \end{pmatrix} \end{aligned}$$
for some \(\pi _{\bullet }\) of the form \(\pi _{\bullet }: \, {\hat{A}}_n \rightarrow {\hat{F}}_n \twoheadrightarrow {\hat{F}}_n^i\), where the first map is given by \((f,{\hat{a}}) \mapsto {\hat{a}}\) and the second map is the canonical projection, and
$$\begin{aligned} D(f) = \begin{pmatrix} f \circ \lambda _1 &{} &{} \\ &{} f \circ \lambda _2 &{} \\ &{} &{} \ddots \end{pmatrix} \end{aligned}$$
for some continuous maps \(\lambda _{\bullet }: \, [0,1] \rightarrow [0,1]\) with \(\lambda _{\bullet }^{-1}(\left\{ 0,1 \right\} ) \subseteq \left\{ 0,1 \right\} \). We require that the diagram
commute, where the vertical maps are given by \((f,{\hat{a}}) \mapsto {\hat{a}}\).
Then there exists a unique homomorphism \({\varphi _n}: \, A_n \rightarrow A_{n+1}\) which fits into the commutative diagram
where all the unlabelled arrows are given by the canonical maps.
By construction, \(\varinjlim \left\{ A_n;\varphi _n \right\} \) has the desired Elliott invariant (in particular, the canonical map \(\varinjlim \left\{ A_n;\varphi _n \right\} \rightarrow {\hat{F}} \mathrel {:=}\varinjlim \left\{ {\hat{F}}_n; {\hat{\psi }}_n \right\} \) induces \(T(\varinjlim \left\{ A_n;\varphi _n \right\} ) \cong T({\hat{F}})\)). However, this is not a simple C*-algebra. Thus a further modification is needed to enforce simplicity. To this end, choose \(\frac{1}{n}\)-dense subsets \(\varvec{I}_n \subseteq (0,1)\) and \(\varvec{Z}_n^{\varvec{i}} \subseteq Z_n^{\varvec{i}}\), and replace \(\varphi _n: \, A_n \rightarrow A_{n+1}\) by the unital homomorphism \(\xi _n: \, A_n \rightarrow A_{n+1}\) such that:
the compositions
$$\begin{aligned}&A_n \overset{\xi _n}{\longrightarrow } A_{n+1} \rightarrow F_{n+1} \twoheadrightarrow F_{n+1}^j\ \ \ \mathrm{and} \\&A_n \overset{\varphi _n}{\longrightarrow } A_{n+1} \rightarrow F_{n+1} \twoheadrightarrow F_{n+1}^j \ \text {coincide except for one index} \ j_{\xi } \ne \varvec{j}; \end{aligned}$$
the composition
$$\begin{aligned} A_n \overset{\xi _n}{\longrightarrow } A_{n+1} \rightarrow F_{n+1} \twoheadrightarrow F_{n+1}^{j_{\xi }} \end{aligned}$$
$$\begin{aligned} A_n \ni (f,a) \mapsto u^* \begin{pmatrix} I(f) &{} &{} \\ &{} Z(a) &{} \\ &{} &{} P(a) \end{pmatrix} u, \end{aligned}$$
where u is a permutation matrix in \(M_{[n+1,j_{\xi }]}\),
$$\begin{aligned} I(f) = \begin{pmatrix} f^{p_1}(t_1) &{} &{} \\ &{} f^{p_2}(t_2) &{} \\ &{} &{} \ddots \end{pmatrix} \end{aligned}$$
for indices \(p_{\bullet }\) and \(t_{\bullet } \in \varvec{I}_n\) such that all possible pairs \(p_{\bullet }, t_{\bullet }\) appear (\(f^p\) is the component of f in \(C([0,1],E_n^p)\)),
$$\begin{aligned} Z(a) = \begin{pmatrix} \tau _1(a(z_1)) &{} &{} \\ &{} \tau _2(a(z_2)) &{} \\ &{} &{} \ddots \end{pmatrix} \end{aligned}$$
for \(z_{\bullet } \in \varvec{Z}_n\) and isomorphisms \(\tau _{\bullet }: \, P_n^{\varvec{i}}(z_{\bullet })M_{\infty }P_n^{\varvec{i}}(z_{\bullet }) \cong {\hat{F}}_n^{\varvec{i}} = M_{[n,\varvec{i}]}\), and
$$\begin{aligned} P(a) = \begin{pmatrix} \pi _n^{i_1}(a) &{} &{} \\ &{} \pi _n^{i_2}(a) &{} \\ &{} &{} \ddots \end{pmatrix}, \end{aligned}$$
where \(\pi _n^i\) is the canonical projection \(F_n \twoheadrightarrow {\hat{F}}_n \twoheadrightarrow {\hat{F}}_n^i\);
for every q, the composition
$$\begin{aligned} A_n \overset{\xi _n}{\longrightarrow } A_{n+1} \rightarrow C([0,1],E_{n+1}) \twoheadrightarrow C([0,1],E_{n+1}^q) \end{aligned}$$
$$\begin{aligned} A_n \ni (f,a) \mapsto u^* \begin{pmatrix} \Phi (f) &{} \\ &{} \Xi (a) \end{pmatrix} u, \end{aligned}$$
where u is a continuous path of unitaries \([0,1] \rightarrow U(E_{n+1}^q)\), \(\Phi (f)\) is of the same form
$$\begin{aligned} \begin{pmatrix} V(f) &{} \\ &{} D(f) \end{pmatrix} \end{aligned}$$
as in (3),
$$\begin{aligned} \Xi (a)(t) = \begin{pmatrix} \tau _1(t)(a(z_1(t))) &{} &{} \\ &{} \tau _2(t)(a(z_2(t))) &{} \\ &{} &{} \ddots \end{pmatrix} \end{aligned}$$
for continuous maps \(z_{\bullet }: \, [0,1] \rightarrow Z_n^{\varvec{i}}\), each of which is either a constant map with value in \(\varvec{Z}_n\) or connects \(\theta _n^{\varvec{i}}\) with \(z_{\bullet } \in \varvec{Z}_n\), and isomorphisms \(\tau _{\bullet }(t): \, P_n^{\varvec{i}}(z_{\bullet }(t)) M_{\infty }P_n^{\varvec{i}}(z_{\bullet }(t)) \cong {\hat{F}}_n^{\varvec{i}}\) depending continuously on \(t \in [0,1]\) such that for \(t \in \left\{ 0,1 \right\} \), \(\tau _{\bullet }(t) = \mathrm{id}\) if \(z_{\bullet }(t) = \theta _n^{\varvec{i}}\) and \(\tau _{\bullet }(t) = \tau _{\bullet }\) if \(z_{\bullet }(t) = z_{\bullet }\), where \(\tau _{\bullet }\) is as in (4).
Then \(\varinjlim \left\{ A_n; \xi _n \right\} \) is a simple unital C*-algebra with prescribed Elliott invariant.
2.2 The stably projectionless case
We follow [11] (see also [16]), with slight modifications as in the unital case. Let \((G_0,T,\rho ,G_1)\) be as in Theorem 1.3.
Write \(G = G_0\) and \(K = G_1\). Choose a dense subgroup \(G' \subseteq \mathrm{Aff}(T)\). Set \(H \mathrel {:=}G \oplus G'\),
$$\begin{aligned} H^+ \mathrel {:=}\left\{ (0,0) \right\} \cup \left\{ (g,f) \in G \oplus G' \text {: }\rho (g)(\tau ) + f(\tau ) > 0 \ \forall \ \tau \in T \right\} . \end{aligned}$$
Then \((H,H^+)\) becomes a simple ordered group, inducing the structure of a dimension group on \(H / \mathrm{Tor}(H)\). Now construct a commutative diagram
with \((H_n^{\varvec{i}})^+ \mathrel {:=}\left\{ (0,0) \right\} \cup ({\mathbb {Z}}_{>0} \oplus \mathrm{Tor}(H_n))\), \((H_n^i)^+ \mathrel {:=}{\mathbb {Z}}_{\ge 0}\) for all \(i \ne \varvec{i}\), and \(H_n^+ \mathrel {:=}\bigoplus _i (H_n^i)^+ \subseteq H_n^{\varvec{i}} \oplus \bigoplus _{i \ne \varvec{i}} H_n^i = H_n\) we have
$$\begin{aligned} \varinjlim \left\{ (H_n,H_n^+); \gamma _n \right\} \cong (H,H^+); \end{aligned}$$
with \(G_n \mathrel {:=}(\gamma _n^{\infty })^{-1}(G)\), where \(\gamma _n^{\infty }: \, H_n \rightarrow H\) is the map provided by (5), we have \(G_n \cap H_n^+ = \left\{ 0 \right\} \), and (5) induces \(\varinjlim \left\{ G_n; \gamma _n \right\} \cong G\);
Let \({\hat{\gamma }}_n: \, H_n / \mathrm{Tor}(H_n) \mathrel {=:}{\hat{H}}_n = \bigoplus _i {\hat{H}}_n^i \rightarrow \bigoplus _j {\hat{H}}_{n+1}^j = {\hat{H}}_{n+1} \mathrel {:=}H_{n+1} / \mathrm{Tor}(H_{n+1})\) be the homomorphism induced by \(\gamma _n\), where \({\hat{H}}_n^i = {\mathbb {Z}}= {\hat{H}}_{n+1}^j\) for all i and j. For fixed n, the map \({\hat{\gamma }} = {\hat{\gamma }}_n\) is given by a matrix \(({\hat{\gamma }}_{ji})\), where we can always assume that \({\hat{\gamma }}_{ji} \in {\mathbb {Z}}_{>0}\) (considered as a map \({\hat{H}}_n^i = {\mathbb {Z}}\rightarrow {\mathbb {Z}}= {\hat{H}}_{n+1}^j\)). Then \(\gamma _n = {\hat{\gamma }} + \tau + t\) for homomorphisms \(\tau : \, \mathrm{Tor}(H_n) \rightarrow \mathrm{Tor}(H_{n+1})\) and \(t: \, {\hat{H}}_n \rightarrow \mathrm{Tor}(H_{n+1})\). Here we think of \({\hat{H}}_n\) as a subgroup of \(H_n\). As in the unital case (see [20, § 6]), given a positive constant \(\Gamma _n\) depending on n, we can always arrange that
Let \(F_n = \bigoplus _i F_n^i\) be C*-algebras, where \(F_n^{\varvec{i}}\) is a homogeneous C*-algebra of the form \(F_n^{\varvec{i}} = P_n^{\varvec{i}} M_{\infty }(C(Z_n^{\varvec{i}})) P_n^{\varvec{i}}\) for a connected compact space \(Z_n^{\varvec{i}}\) with base point \(\theta _n^{\varvec{i}}\) and a projection \(P_n^{\varvec{i}} \in M_{\infty }(C(Z_n^{\varvec{i}}))\), while for all other indices \(i \ne \varvec{i}\), \(F_n^i\) is a matrix algebra, \(F_n^i = M_{[n,i]}\). We require that \((K_0(F_n^{\varvec{i}}), K_0(F_n^{\varvec{i}})^+) \cong (H_n^{\varvec{i}},(H_n^{{\mathbf {i}}})^+)\) and \(K_1(F_n^{\varvec{i}}) \cong K_n\), so that \((K_0(F_n), K_0(F_n)^+, K_1(F_n)) \cong (H_n,H_n^+,K_n)\).
Let \(\psi _n\) be a unital homomorphism \(F_n \rightarrow F_{n+1}\) which induces \(\gamma _n\) in \(K_0\) and \(\chi _n\) in \(K_1\). We write \(F_n = P_n C(Z_n) P_n\) where \(Z_n = Z_n^{\varvec{i}} \amalg \coprod _{i \ne \varvec{i}} \{ \theta _n^i \}\), and \(P_n = (P_n^{\varvec{i}},(1_{[n,i]})_{i \ne \varvec{i}}) \in M_{\infty }(C(Z_n^{\varvec{i}})) \oplus \bigoplus _{i \ne \varvec{i}} M_{[n,i]}(C(\{ \theta _n^i \}))\). Thus, evaluation in \(\theta _n^i\) induces a quotient map \(\pi _n: \, F_n \rightarrow {\hat{F}}_n \mathrel {:=}\bigoplus _i {\hat{F}}_n^i\), where \({\hat{F}}_n^i = M_{[n,i]}\). We require that \(\psi _n\) induce homomorphisms \({\hat{\psi }}_n: \, {\hat{F}}_n \rightarrow {\hat{F}}_{n+1}\) so that we obtain a commutative diagram
where the vertical arrows are the canonical projections. As \(\mathrm{Tor}(H_n) \subseteq G_n\), \(H_n / G_n\) is torsion-free, and there is a canonical projection \(H_n / \mathrm{Tor}(H_n) \rightarrow H_n / G_n\). Now let \(E_n \mathrel {:=}\bigoplus _p E_n^p\), \(E_n^p = M_{\left\{ n,p \right\} }\), such that \(K_0(E_n) \cong H_n / G_n\), and for fixed n, let \(\beta _0, \, \beta _1: \, {\hat{F}}_n \rightarrow E_n\) be (necessarily non-unital) homomorphisms which yield the commutative diagram
As in the unital case, we may assume \(\beta _0 \oplus \beta _1: \, {\hat{F}}_n \rightarrow E_n \oplus E_n\) to be injective.
Choose for each n a homomorphism \({\hat{\varphi }}_n: \, {\hat{A}}_n \rightarrow {\hat{A}}_{n+1}\) such that the composition with the map \(C([0,1],E_{n+1}) \twoheadrightarrow C([0,1],E_{n+1}^q)\) induced by the canonical projection \(E_{n+1} \twoheadrightarrow E_{n+1}^q\),
for some continuous maps \(\lambda _{\bullet }: \, [0,1] \rightarrow [0,1]\) with \(\lambda _{\bullet }^{-1}(\left\{ 0,1 \right\} ) \subseteq \left\{ 0,1 \right\} \). We require that
commutes, where the vertical maps are given by \((f,{\hat{a}}) \mapsto {\hat{a}}\).
Then there exists a unique homomorphism \(\varphi _n: \, A_n \rightarrow A_{n+1}\) which fits into the commutative diagram
By construction, \(\varinjlim \left\{ A_n;\varphi _n \right\} \) has the desired Elliott invariant (the details are as in the unital case, see [20, § 13]). The same modification as in the unital case produces new connecting maps \(\xi _n: \, A_n \rightarrow A_{n+1}\) such that \(\varinjlim \left\{ A_n;\xi _n \right\} \) is a simple (stably projectionless) C*-algebra with prescribed Elliott invariant. Moreover, choosing \(\xi _n\) with the property that strictly positive elements are sent to strictly positive elements, \(\varinjlim \left\{ A_n;\xi _n \right\} \) will have continuous scale by [18, Theorem 9.3] (compare also [19, § 6]). In addition, we choose \(\xi _n\) such that full elements are sent to full elements.
In an earlier version of this paper, we modified the construction in [19, § 6] instead, which covers all Elliott invariants for stably projectionless C*-algebras with trivial pairing between K-theory and traces (\(\rho = 0\)). I would like to thank the referee for pointing out that [11] (see also [16]) describes a general construction exhausting all possible Elliott invariants with weakly unperforated pairing between K-theory and traces (in the stably projectionless case, this is precisely the condition that \(\rho \) is weakly unperforated as in Theorem 1.3).
3 Concrete construction of AH-algebras
We start with the following standard fact.
Lemma 3.1
Given an integer \(N > 1\), let \(\mu _N: \, S^1 \rightarrow S^1, \, z \mapsto z^N\), and set \(X_N \mathrel {:=}D^2 \cup _{\mu _N} S^1\), where we identify \(z \in S^1 = \partial D^2\) with \(\mu _N(z) \in S^1\). Then
$$\begin{aligned} H^{\bullet }(X_N) \cong {\left\{ \begin{array}{ll} {\mathbb {Z}}&{} \mathrm{if} \ \bullet = 0;\\ {\mathbb {Z}}/ N &{} \mathrm{if} \ \bullet = 2;\\ \left\{ 0 \right\} &{} \mathrm{else}. \end{array}\right. } \end{aligned}$$
Moreover, \((K_0(C(X_N)),K_0(C(X_N))^+,[1_{C(X_N)}],K_1(C(X_N))) \cong ({\mathbb {Z}}\oplus {\mathbb {Z}}/ N, \left\{ (0,0) \right\} \cup ({\mathbb {Z}}_{>0} \oplus {\mathbb {Z}}/ N), (1,0), \left\{ 0 \right\} )\).
In the following, we view \(S^2\) as the one point compactification of \(\mathring{D}^2\), \(S^2 = \mathring{D}^2 \cup \left\{ \infty \right\} \).
Lemma 3.2
Let \(X_N \twoheadrightarrow S^2\) be the continuous map sending \(\mathring{D}^2 \subseteq D^2\) identically to \(\mathring{D}^2 \subseteq S^2\), \(\partial D^2\) to \(\infty \) and \(S^1\) to \(\infty \). Let \(p_{X_N}\) be the pullback of the Bott line bundle on \(S^2\) (see for instance [39, § 6.2]) to \(X_N\) via this map. We view \(p_{X_N}\) as a projection in \(M_2(C(X_N))\). Then there is an isomorphism \(K_0(C(X_N)) \cong {\mathbb {Z}}\oplus {\mathbb {Z}}/ N\) identifying the class of \(1_{C(X_N)}\) with the generator of \({\mathbb {Z}}\) and the class of \(p_{X_N}\) with (1, 1).
Just analyse the K-theory exact sequence attached to \(0 \rightarrow C_0(\mathring{D}^2) \rightarrow C(X_N) \rightarrow C(S^1) \rightarrow 0\). \(\square \)
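To spell out this sketch: with \(K_0(C_0(\mathring{D}^2)) \cong {\mathbb {Z}}\), \(K_1(C_0(\mathring{D}^2)) = 0\) and \(K_0(C(S^1)) \cong K_1(C(S^1)) \cong {\mathbb {Z}}\), the six-term sequence unrolls to
$$\begin{aligned} 0 \rightarrow K_1(C(X_N)) \rightarrow K_1(C(S^1)) \cong {\mathbb {Z}}\overset{\partial }{\longrightarrow } {\mathbb {Z}}\cong K_0(C_0(\mathring{D}^2)) \rightarrow K_0(C(X_N)) \rightarrow K_0(C(S^1)) \cong {\mathbb {Z}}\rightarrow 0, \end{aligned}$$
where the index map \(\partial \) is multiplication by \(\pm N\) since the attaching map \(\mu _N\) has degree N. Hence \(K_1(C(X_N)) = 0\), and \(K_0(C(X_N))\) is an extension of \({\mathbb {Z}}\) by \({\mathbb {Z}}/ N\), which splits because \({\mathbb {Z}}\) is free; the identification of \([1_{C(X_N)}]\) and \([p_{X_N}]\) then follows from naturality with respect to the map \(X_N \twoheadrightarrow S^2\).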
We recall another standard fact.
Lemma 3.3
Given an integer \(N > 1\), let \(Y_N \mathrel {:=}\Sigma X_N \cong D^3 \cup _{\Sigma \mu _N} S^2\), where we identify \(z \in S^2 = \partial D^3 \cong \Sigma S^1\) with \((\Sigma \mu _N)(z) \in \Sigma S^1 \cong S^2\). (Here \(\Sigma \) stands for suspension.) Then
$$\begin{aligned} H^{\bullet }(Y_N) \cong {\left\{ \begin{array}{ll} {\mathbb {Z}}&{} \mathrm{if} \ \bullet = 0;\\ {\mathbb {Z}}/ N &{} \mathrm{if} \ \bullet = 3;\\ \left\{ 0 \right\} &{} \mathrm{else}. \end{array}\right. } \end{aligned}$$
Moreover, \(K_0(C(Y_N)) = {\mathbb {Z}}[1_{C(Y_N)}]\) and \(K_1(C(Y_N)) \cong {\mathbb {Z}}/ N\).
Lemma 3.4
Let \(Y_N \twoheadrightarrow S^3\) be the continuous map sending \(\mathring{D}^3 \subseteq D^3\) identically to \(\mathring{D}^3 \subseteq S^3\), \(\partial D^3\) to \(\infty \) and \(S^2\) to \(\infty \). Then the dual map \(C(S^3) \rightarrow C(Y_N)\) induces in \(K_1\) a surjection \(K_1(C(S^3)) \cong {\mathbb {Z}}\twoheadrightarrow {\mathbb {Z}}/ N \cong K_1(C(Y_N))\).
Just analyse the K-theory exact sequence attached to \(0 \rightarrow C_0(\mathring{D}^3) \rightarrow C(Y_N) \rightarrow C(S^2) \rightarrow 0\).
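Sketching this similarly: here \(K_0(C_0(\mathring{D}^3)) = 0\), \(K_1(C_0(\mathring{D}^3)) \cong {\mathbb {Z}}\), \(K_0(C(S^2)) \cong {\mathbb {Z}}^2\) and \(K_1(C(S^2)) = 0\), so the six-term sequence reduces to
$$\begin{aligned} 0 \rightarrow K_0(C(Y_N)) \rightarrow K_0(C(S^2)) \cong {\mathbb {Z}}^2 \overset{\exp }{\longrightarrow } {\mathbb {Z}}\cong K_1(C_0(\mathring{D}^3)) \rightarrow K_1(C(Y_N)) \rightarrow 0. \end{aligned}$$
The exponential map vanishes on \([1_{C(S^2)}]\) and is multiplication by \(\pm N\) on the reduced (Bott) generator, because the attaching map \(\Sigma \mu _N\) has degree N; hence \(K_0(C(Y_N)) \cong {\mathbb {Z}}\), generated by \([1_{C(Y_N)}]\), and \(K_1(C(Y_N)) \cong {\mathbb {Z}}/ N\). The surjectivity statement follows since \(K_1(C_0(\mathring{D}^3)) \rightarrow K_1(C(Y_N))\) is onto and factors through \(K_1(C(S^3)) \cong {\mathbb {Z}}\).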
Analysing K-theory exact sequences, the following is a straightforward observation.
Lemma 3.5
Let \(N, N' \in {\mathbb {Z}}_{>1}\) and \(m \in {\mathbb {Z}}_{>0}\) with \(N' \mid m \cdot N\), say \(m \cdot N = m' \cdot N'\). Define a continuous map
$$\begin{aligned} \Psi _m^*: \, X_{N'} = D^2 \cup _{\mu _{N'}} S^1 \rightarrow D^2 \cup _{\mu _N} S^1 = X_N \end{aligned}$$
by sending \(x \in D^2\) to \(x^m \in D^2\) and \(z \in S^1\) to \(z^{m'} \in S^1\). Then the dual map \(\Psi _m: \, C(X_N) \rightarrow C(X_{N'})\) induces in \(K_0\) the homomorphism
$$\begin{aligned} K_0(C(X_N)) \cong {\mathbb {Z}}\oplus {\mathbb {Z}}/ N \overset{ \left( {\begin{matrix} 1 &{} 0 \\ 0 &{} m \end{matrix}} \right) }{\longrightarrow } {\mathbb {Z}}\oplus {\mathbb {Z}}/ N' \cong K_0(C(X_{N'})). \end{aligned}$$
Naturality of suspension yields
Lemma 3.6
Let \(N, N' \in {\mathbb {Z}}_{>1}\) and \(m \in {\mathbb {Z}}_{>0}\) with \(N' \mid m \cdot N\), say \(m \cdot N = m' \cdot N'\). Let \(\Sigma \Psi _m: \, C(Y_N) \rightarrow C(Y_{N'})\) be the map dual to \(\Sigma \Psi _m^*: \, Y_{N'} \cong \Sigma X_{N'} \rightarrow \Sigma X_N \cong Y_N\). Then \(\Sigma \Psi _m\) induces in \(K_1\) the homomorphism
$$\begin{aligned} K_1(C(Y_N)) \cong {\mathbb {Z}}/ N \overset{m}{\longrightarrow } {\mathbb {Z}}/ N' \cong K_1(C(Y_{N'})). \end{aligned}$$
In the following, we view \(X_N\) and \(Y_N\) as pointed spaces, with base point \(1 = (1,0) \in S^1 = \partial D^2 \subseteq D^2\) in \(X_N\) and base point \((1,0,0) \in S^2 = \partial D^3 \subseteq D^3\) in \(Y_N\). Note that \(\Psi _m\) and \(\Sigma \Psi _m\) preserve base points. Moreover, if \(\theta \) denotes the base point of \(X_N\), then the projection \(p_{X_N}\) in Lemma 3.2 satisfies
$$\begin{aligned} p_{X_N}(\theta ) = \left( {\begin{matrix} 1 &{} 0 \\ 0 &{} 0 \end{matrix}} \right) . \end{aligned}$$
Now let \(H_n = H_n^{\varvec{i}} \oplus \bigoplus _{i \ne \varvec{i}} H_n^i\), \(H_{n+1} = H_{n+1}^{\varvec{j}} \oplus \bigoplus _{j \ne \varvec{j}} H_{n+1}^j\) be abelian groups with \(H_n^{\varvec{i}} = {\mathbb {Z}}\oplus T_n\), \(H_{n+1}^{\varvec{j}} = {\mathbb {Z}}\oplus T_{n+1}\) for finitely generated torsion groups \(T_n\), \(T_{n+1}\), and \(H_n^i = {\mathbb {Z}}\), \(H_{n+1}^j = {\mathbb {Z}}\) for all \(i \ne \varvec{i}\), \(j \ne \varvec{j}\). Let \((H_n^{\varvec{i}})^+ \mathrel {:=}\left\{ (0,0) \right\} \cup ({\mathbb {Z}}_{>0} \oplus T_n)\), \((H_n^i)^+ \mathrel {:=}{\mathbb {Z}}_{\ge 0}\) for all \(i \ne \varvec{i}\), \(H_n^+ \mathrel {:=}\bigoplus _i (H_n^i)^+ \subseteq H_n^{\varvec{i}} \oplus \bigoplus _{i \ne \varvec{i}} H_n^i = H_n\) and \(u_n = (([n,\varvec{i}],\tau _n),([n,i])_{i \ne \varvec{i}}) \in H_n^+\). Similarly, define \((H_{n+1}^j)^+\), \(H_{n+1}^+ \mathrel {:=}\bigoplus _i (H_{n+1}^j)^+\), and let \(u_{n+1} = (([n+1,\varvec{j}],\tau _{n+1}),([n+1,j])_{j \ne \varvec{j}}) \in H_{n+1}^+\). Let \(T_n = \bigoplus _k T_n^k\), where \(T_n^k = {\mathbb {Z}}/ N_n^k\), and \(T_{n+1} = \bigoplus _k T_{n+1}^l\), where \(T_{n+1}^l = {\mathbb {Z}}/ N_{n+1}^l\). Let \(\gamma _n: \, H_n \rightarrow H_{n+1}\) be a homomorphism with \(\gamma _n(u_n) = u_{n+1}\). (In the stably projectionless case, these order units are not part of the given data, but we can always choose such order units.) Let us fix n, and suppose that \(\gamma = \gamma _n\) induces a homomorphism \({\hat{\gamma }}: \, H_n / \mathrm{Tor}(H_n) = {\hat{H}}_n = \bigoplus _i {\hat{H}}_n^i \rightarrow \bigoplus _j {\hat{H}}_{n+1}^j = {\hat{H}}_{n+1} = H_{n+1} / \mathrm{Tor}(H_{n+1})\), where \({\hat{H}}_n^i = {\mathbb {Z}}= {\hat{H}}_{n+1}^j\) for all i and j. Viewing \({\hat{H}}_n\) as a subgroup (actually a direct summand) of \(H_n\), we obtain that \(\gamma _n = {\hat{\gamma }} + \tau + t\) for homomorphisms \(\tau : \, \mathrm{Tor}(H_n) \rightarrow \mathrm{Tor}(H_{n+1})\) and \(t: \, {\hat{H}}_n \rightarrow \mathrm{Tor}(H_{n+1})\). \({\hat{\gamma }}\) is given by an integer matrix \(({\hat{\gamma }}_{ji})\). Similarly, \(\tau \) is given by an integer matrix \((\tau _{lk})\), where we view \(\tau _{lk}\) as a homomorphism \(T_n^k \rightarrow T_{n+1}^l\). Also, t is given by an integer matrix \((t_{li})\), where we view \(t_{li}\) as a homomorphism \(H_n^i \rightarrow T_{n+1}^l\). Clearly, we can always arrange \(\tau _{lk}, t_{li} > 0\) for all l, k, i, and because of (2) and (6), we can also arrange
$$\begin{aligned} {\hat{\gamma }}_{ji} > 0 \ \mathrm{and} \ {\hat{\gamma }}_{\varvec{j} \varvec{i}} \ge \#_0(k) + 1. \end{aligned}$$
Here \(\#_0(k)\) is the number of direct summands in \(T_n\) (i.e., the number of indices k).
We have the following direct consequence of Lemma 3.1.
Lemma 3.7
Let \(X_n^{\varvec{i}} \mathrel {:=}\bigvee _k X_{N_n^k}\), where we take the wedge sum with respect to the base points of the individual \(X_{N_n^k}\). Denote the base point of \(X_n^{\varvec{i}}\) by \(\theta _n^{\varvec{i}}\). Set \(X_n \mathrel {:=}X_n^{\varvec{i}} \amalg \coprod _{i \ne \varvec{i}} \{ \theta _n^i \}\). Then
$$\begin{aligned} (K_0(C(X_n)),K_0(C(X_n))^+,K_1(C(X_n))) \cong (H_n,H_n^+,\left\{ 0 \right\} ). \end{aligned}$$
Define \(X_{n+1}\) in an analogous way, i.e., \(X_{n+1}^{\varvec{j}} \mathrel {:=}\bigvee _l X_{N_{n+1}^l}\), and \(X_{n+1} \mathrel {:=}X_{n+1}^{\varvec{j}} \amalg \coprod _{j \ne \varvec{j}} \{ \theta _{n+1}^j \}\). Now, for fixed n, our goal is to construct a homomorphism \(\psi \) realizing the homomorphism \(\gamma \) in \(K_0\).
The map \(\bigvee _l \Psi _{\tau _{lk}}^*: \, \bigvee _l X_{N_{n+1}^l} \rightarrow X_{N_n^k}\) induces the dual homomorphism \(\psi _{\tau }^k: \, C(X_{N_n^k}) \rightarrow C(X_{n+1}^{\varvec{j}})\). Here \(\Psi _{\tau _{lk}}\) are the maps from Lemma 3.5. The direct sum \(\bigoplus _k \psi _{\tau }^k: \, \bigoplus _k C(X_{N_n^k}) \rightarrow M_{\#_0(k)}(C(X_{n+1}^{\varvec{j}}))\) restricts to a homomorphism \(\psi _{\tau }: \, C(X_n^{\varvec{i}}) = C(\bigvee _k X_{N_n^k}) \rightarrow M_{\#_0(k)}(C(X_{n+1}^{\varvec{j}}))\).
Let \(p^{(\varvec{i})} \in M_2(C(X_{n+1}^{\varvec{j}})) = M_2(C(\bigvee _l X_{N_{n+1}^l}))\) be given by \(p^{(\varvec{i})} \vert _{C(X_{N_{n+1}^l})} = M_2(\Psi _{t_{l \varvec{i}}})(p_{X_{N_{n+1}^l}})\). Define \(\psi _t\) as the composite
$$\begin{aligned} C(X_n^{\varvec{i}}) \overset{{\text {ev}}_{\theta _n^{\varvec{i}}}}{\longrightarrow } {\mathbb {C}}\rightarrow M_2(C(X_{n+1}^{\varvec{j}})), \ \text {where the second map is given by} \ 1 \mapsto p^{(\varvec{i})}. \end{aligned}$$
Moreover, define \(\psi _{\varvec{j} \varvec{i}}: \, C(X_n^{\varvec{i}}) \rightarrow M_{{\hat{\gamma }}_{\varvec{j} \varvec{i}} + 1}(C(X_{n+1}^{\varvec{j}}))\) by setting
$$\begin{aligned} \psi _{\varvec{j} \varvec{i}}(f) = \begin{pmatrix} f(\theta _n^{\varvec{i}}) &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} \\ &{} &{} f(\theta _n^{\varvec{i}}) &{} &{} \\ &{} &{} &{} \psi _{\tau }(f) &{} \\ &{} &{} &{} &{} \psi _t(f) \end{pmatrix} \end{aligned}$$
where we put \({\hat{\gamma }}_{\varvec{j} \varvec{i}} - \#_0(k) - 1\) copies of \(f(\theta _n^{\varvec{i}})\) on the diagonal.
For \(i \ne \varvec{i}\), let \(p^{(i)} \in M_2(C(X_{n+1}^{\varvec{j}})) = M_2(C(\bigvee _l X_{N_{n+1}^l}))\) be given by \(p^{(i)} \vert _{C(X_{N_{n+1}^l})} = M_2(\Psi _{t_{l i}})(p_{X_{N_{n+1}^l}})\). Define
$$\begin{aligned} \psi _{\varvec{j} i}: \, C(\{ \theta _n^i \}) = {\mathbb {C}}\rightarrow M_{{\hat{\gamma }}_{\varvec{j} i} + 1}(C(X_{n+1}^{\varvec{j}})) \ \text {by sending} \ 1 \in {\mathbb {C}}\ \mathrm{to} \ \begin{pmatrix} 1 &{} &{} &{} \\ &{} \ddots &{} &{} \\ &{} &{} 1 &{} \\ &{} &{} &{} p^{(i)} \end{pmatrix}, \end{aligned}$$
where we put \({\hat{\gamma }}_{\varvec{j} i} - 1\) copies of 1 on the diagonal.
For \(j \ne \varvec{j}\), define
$$\begin{aligned} \psi _{j \varvec{i}}: \, C(X_n^{\varvec{i}}) \rightarrow M_{{\hat{\gamma }}_{j \varvec{i}}}(C(\{ \theta _{n+1}^{j} \})), \, f \mapsto \begin{pmatrix} f(\theta _n^{\varvec{i}}) &{} &{} \\ &{} \ddots &{} \\ &{} &{} f(\theta _n^{\varvec{i}}) \end{pmatrix}, \end{aligned}$$
where we put \({\hat{\gamma }}_{j \varvec{i}}\) copies of \(f(\theta _n^{\varvec{i}})\) on the diagonal.
For \(i \ne \varvec{i}\) and \(j \ne \varvec{j}\), define
$$\begin{aligned} \psi _{ji}: \, C(\{ \theta _n^{i} \}) \rightarrow M_{{\hat{\gamma }}_{ji}}(C(\{ \theta _{n+1}^{j} \})), \, 1 \mapsto \begin{pmatrix} 1 &{} &{} \\ &{} \ddots &{} \\ &{} &{} 1 \end{pmatrix}, \end{aligned}$$
where we put \({\hat{\gamma }}_{ji}\) copies of 1 on the diagonal.
To unify notation, let us set \(X_n^i = \{ \theta _n^i \}\), \(X_{n+1}^j = \{ \theta _{n+1}^j \}\).
For \(u_n = (([n,\varvec{i}],\tau _n),([n,i])_{i \ne \varvec{i}}) \in H_n^+\), let \(s(n,\varvec{i})\) be a positive integer and \(P_n^{\varvec{i}} \in M_{s(n,\varvec{i})}(C(X_n^{\varvec{i}}))\) a projection such that:
\(P_n^{\varvec{i}}\) is a sum of line bundles;
\([P_n^{\varvec{i}}]\) corresponds to \(([n,\varvec{i}],\tau _n)\) under the identification in Lemma 3.7;
\(P_n^{\varvec{i}}(\theta _n^{\varvec{i}}) = 1_{[n,\varvec{i}]}\) is of the form
$$\begin{aligned} u^* \begin{pmatrix} 1 &{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} \\ &{} &{} 1 &{} &{} &{} \\ &{} &{} &{} 0 &{} &{} \\ &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} 0 \end{pmatrix} u, \ \ \ \mathrm{where} \ u \ \text {is a permutation matrix}. \end{aligned}$$
\(P_n^{\varvec{i}}\) exists because of Lemma 3.2. Moreover, we can extend \(P_n^{\varvec{i}}\) by \(1_{[n,i]}\) to a projection in \(\bigoplus _i M_{s(n,i)}(C(X_n^i))\) such that \([P_n]\) corresponds to \(u_n\) under the isomorphism in Lemma 3.7. Here \(s(n,i) = [n,i]\) whenever \(i \ne \varvec{i}\). Then
$$\begin{aligned} \left( M_{s(n,i)}(\psi _{ji}) \right) _{ji}: \, \bigoplus _i M_{s(n,i)}(C(X_n^i)) \rightarrow \bigoplus _j M_{s(n+1,j)}(C(X_{n+1}^j)) \end{aligned}$$
sends \(P_n\) to \(P_{n+1}\), where \(P_{n+1}\) is of the same form as \(P_n\), with \([P_{n+1}]\) corresponding to \(u_{n+1}\) under the isomorphism from Lemma 3.7. Hence the map in (9) restricts to a unital homomorphism
$$\begin{aligned} P_n M_{\infty }(C(X_n)) P_n \rightarrow P_{n+1} M_{\infty }(C(X_{n+1})) P_{n+1} \end{aligned}$$
which in \(K_0\) induces \(\gamma \) by Lemma 3.5.
Now we turn to \(K_1\). Assume \(K_n = \bigoplus _i K_n^i\) is an abelian group, where for a distinguished index \(\varvec{i}\), \(K_n^{\varvec{i}} = T_n\) is a finitely generated torsion group \(T_n = \bigoplus _k T_n^k\), \(T_n^k = {\mathbb {Z}}/ N_n^k\), and \(K_n^i = {\mathbb {Z}}\) for all \(i \ne \varvec{i}\). Similarly, let \(K_{n+1} = \bigoplus _j K_{n+1}^j\) be an abelian group, where for a distinguished index \(\varvec{j}\), \(K_{n+1}^{\varvec{j}} = T_{n+1}\) is a finitely generated torsion group \(T_{n+1} = \bigoplus _l T_{n+1}^l\), \(T_{n+1}^l = {\mathbb {Z}}/ N_{n+1}^l\), and \(K_{n+1}^j = {\mathbb {Z}}\) for all \(j \ne \varvec{j}\). For fixed n, let \(\chi : \, K_n \rightarrow K_{n+1}\) be a homomorphism which is a sum \(\chi = {\hat{\chi }} + \tau + t\), where \({\hat{\chi }}: \, \bigoplus _{i \ne \varvec{i}} K_n^i \rightarrow \bigoplus _{j \ne \varvec{j}} K_{n+1}^j\) is given by an integer matrix \(({\hat{\chi }}_{ji})\) (viewing \({\hat{\chi }}_{ji}\) as a homomorphism \(K_n^i \rightarrow K_{n+1}^j\)), \(\tau : \, T_n \rightarrow T_{n+1}\) is given by an integer matrix \((\tau _{lk})\) (viewing \(\tau _{lk}\) as a homomorphism \(T_n^k \rightarrow T_{n+1}^l\)), and \(t: \, \bigoplus _{i \ne \varvec{i}} K_n^i \rightarrow T_{n+1}\) is given by an integer matrix \((t_{li})\) (viewing \(t_{li}\) as a homomorphism \(K_n^i \rightarrow T_{n+1}^l\)). We can always arrange that all the entries of these matrices are positive integers.
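For instance, a minimal illustrative instance of this setup (which plays no role in what follows) is \(K_n = {\mathbb {Z}}/2 \oplus {\mathbb {Z}}\), \(K_{n+1} = {\mathbb {Z}}/4 \oplus {\mathbb {Z}}\), with
$$\begin{aligned} \tau = (2): \, {\mathbb {Z}}/2 \rightarrow {\mathbb {Z}}/4, \ x \mapsto 2x, \qquad t = (1): \, {\mathbb {Z}}\rightarrow {\mathbb {Z}}/4, \ x \mapsto x \bmod 4, \qquad {\hat{\chi }} = (3): \, {\mathbb {Z}}\rightarrow {\mathbb {Z}}, \ x \mapsto 3x, \end{aligned}$$
where all matrix entries are positive integers, as required.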
The following is a direct consequence of Lemma 3.3.
Let \(Y_n^{\varvec{i}} = \bigvee _k Y_{N_n^k}\) and \(Y_n = Y_n^{\varvec{i}} \vee \bigvee _{i \ne \varvec{i}} S^3\). Then \(K_0(C(Y_n)) \cong {\mathbb {Z}}\) and \(K_1(C(Y_n)) \cong K_n\).
We view \(Y_n\) as a pointed space, and let \(\theta _n\) be the base point of \(Y_n\). Now let \(\psi _{\tau }^k: \, C(Y_{N_n^k}) \rightarrow C(\bigvee _l Y_{N_{n+1}^l}) = C(Y_{n+1}^{\varvec{j}})\) be the dual homomorphism of the map \(\bigvee _l \Sigma (\Psi _{\tau _{lk}}^*): \, Y_{n+1}^{\varvec{j}} = \bigvee _l Y_{N_{n+1}^l} \rightarrow Y_{N_n^k}\). Here \(\Sigma (\Psi _{\tau _{lk}}^*)\) are the maps from Lemma 3.6. The direct sum \(\bigoplus _k \psi _{\tau }^k: \, \bigoplus _k C(Y_{N_n^k}) \rightarrow M_{\#_1(k)}(C(Y_{n+1}^{\varvec{j}}))\) restricts to a homomorphism \(\psi _{\varvec{i}}: \, C(Y_n^{\varvec{i}}) = C(\bigvee _k Y_{N_n^k}) \rightarrow M_{\#_1(k)}(C(Y_{n+1}^{\varvec{j}})) \hookrightarrow M_{\#_1(k)}(C(Y_{n+1}))\).
For \(i \ne \varvec{i}\), define \(\psi _{\varvec{j}i}: \, C(Y_n^i) = C(S^3) \rightarrow C(Y_{n+1}^{\varvec{j}})\) as the dual map of the composite
$$\begin{aligned} Y_{n+1}^{\varvec{j}} = \bigvee _l Y_{N_{n+1}^l} \overset{\bigvee _l \Sigma (\psi _{t_{li}}^*)}{\longrightarrow } \bigvee _l Y_{N_{n+1}^l} \overset{\bigvee _l \Omega _l^*}{\longrightarrow } S^3, \end{aligned}$$
where \(\Omega ^*_l\) is the map \(Y_{N_{n+1}^l} \rightarrow S^3\) constructed in Lemma 3.4.
For \(i \ne \varvec{i}\) and \(j \ne \varvec{j}\), define \(\psi _{ji}: \, C(Y_n^i) = C(S^3) \rightarrow C(S^3) = C(Y_{n+1}^j)\) as the dual map of \(\Sigma \Sigma \mu _{{\hat{\chi }}_{ji}}: \, S^3 \cong \Sigma \Sigma S^1 \rightarrow \Sigma \Sigma S^1 \cong S^3\), where \(\mu _{{\hat{\chi }}_{ji}}\) is the map from Lemma 3.1.
For every \(i \ne \varvec{i}\), we thus obtain the direct sum \(\bigoplus _j \psi _{ji}: \, C(Y_n^i) \rightarrow \bigoplus _j C(Y_{n+1}^j)\) with image in \(C(Y_{n+1}) = C(\bigvee _j Y_{n+1}^j) \subseteq \bigoplus _j C(Y_{n+1}^j)\). Hence we obtain a homomorphism \(\psi _i: \, C(Y_n^i) \rightarrow C(Y_{n+1})\).
Now let \(\#_1(i)\) be the number of summands of \(K_n\). Then let \(\psi : \, C(Y_n) \rightarrow M_{\#_1(k) + \#_1(i) - 1}(C(Y_{n+1}))\) be the restriction of \(\bigoplus _i \psi _i\) to \(C(Y_n) = C(\bigvee _i Y_n^i) \subseteq \bigoplus _i C(Y_n^i)\). By construction, and using Lemmas 3.4 and 3.6 , \(\psi \) induces \(\chi \) in \(K_1\).
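The size of the target matrix algebra is simply the sum of the block sizes occurring here: \(\psi _{\varvec{i}}\) contributes a block of size \(\#_1(k)\), and each of the \(\#_1(i) - 1\) homomorphisms \(\psi _i\) with \(i \ne \varvec{i}\) contributes a block of size 1, so that
$$\begin{aligned} \#_1(k) + (\#_1(i) - 1) = \#_1(k) + \#_1(i) - 1, \end{aligned}$$
which also explains the requirement \({\hat{\gamma }}_{\varvec{j} \varvec{i}} \ge \#_1(k) + \#_1(i) - 1\) arranged below.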
We now combine our two constructions. Define \(Z_n = X_n \vee Y_n\), where we identify the base point \(\theta _n^{\varvec{i}} \in X_n^{\varvec{i}} \subseteq X_n\) with \(\theta _n \in Y_n\). We extend \(P_n\) from \(X_n\) constantly to \(Y_n\) (with value \(P_n(\theta _n^{\varvec{i}})\)). Note that \(\mathrm{rk}\,(P_{n+1}(\theta _{n+1}^{\varvec{j}})) = {\hat{\gamma }}_{\varvec{j} \varvec{i}} \cdot \mathrm{rk}\,(P_n (\theta _n^{\varvec{i}}))\). Because of (2) and (6), we can arrange \({\hat{\gamma }}_{\varvec{j} \varvec{i}} \ge \#_1(k) + \#_1(i) - 1\). By adding \({\text {ev}}_{\theta _n}\) on the diagonal if necessary, we can modify \(\psi \) to a homomorphism \(\psi : \, C(Y_n) \rightarrow M_{{\hat{\gamma }}_{\varvec{j} \varvec{i}}}(C(Y_{n+1}))\) which induces \(\chi \) in \(K_1\). We can thus think of \(M_{[n,\varvec{i}]}(\psi )\) as a unital homomorphism \(P_n(\theta _n^{\varvec{i}}) M_{s(n,\varvec{i})}(C(Y_n)) P_n(\theta _n^{\varvec{i}}) \rightarrow P_{n+1}(\theta _{n+1}^{\varvec{j}}) M_{s(n+1,\varvec{j})}(C(Y_{n+1})) P_{n+1}(\theta _{n+1}^{\varvec{j}})\), i.e., as a unital homomorphism \(P_n M_{s(n,\varvec{i})}(C(Y_n)) P_n \rightarrow P_{n+1} M_{s(n+1,\varvec{j})}(C(Y_{n+1})) P_{n+1}\). In combination with the homomorphism (10), we obtain a unital homomorphism
$$\begin{aligned} P_n M_{\infty }(C(Z_n)) P_n \rightarrow P_{n+1} M_{\infty }(C(Z_{n+1})) P_{n+1} \end{aligned}$$
which induces \(\gamma \) in \(K_0\), sending \(u_n\) to \(u_{n+1}\), and \(\chi \) in \(K_1\).
Evaluation at \(\theta _n^{\varvec{i}} = \theta _n\) and \(\theta _n^i\) (for \(i \ne \varvec{i}\)) induces a quotient homomorphism which fits into a commutative diagram
If all \(K_n\) are torsion-free, then we can replace \(S^3\) by \(S^1\) in our construction of \(Y_n\).
4 The complete construction
4.1 The general construction with concrete models
Applying our construction in Sect. 3, we obtain concrete models for \(F_n\), \({\hat{F}}_n\), \(\gamma _n\) and \({\hat{\gamma }}_n\) which we now plug into the general construction in Sects. 2.1 and 2.2. Note that it is crucial that we work with these concrete models from Sect. 3. The reason is that only for these models can we provide groupoid descriptions of the C*-algebras and their homomorphisms which arise in the general construction (see Sect. 6).
Note that with these concrete models, the composition
$$\begin{aligned} M_{[n,i]} \hookrightarrow {\hat{F}}_n \overset{\beta _{\bullet }}{\longrightarrow } E_n \twoheadrightarrow M_{\left\{ n,p \right\} }, \end{aligned}$$
where the first and third maps are the canonical ones, is of the form
$$\begin{aligned} x \mapsto u^* \begin{pmatrix} x &{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} \\ &{} &{} x &{} &{} &{} \\ &{} &{} &{} 0 &{} &{} \\ &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} 0 \end{pmatrix} u \end{aligned}$$
for a permutation matrix u.
Apart from inserting these concrete models, we keep the same construction as in Sects. 2.1 and 2.2.
4.2 Summary of the construction
In both the unital and stably projectionless cases, the C*-algebra with the prescribed Elliott invariant which we constructed is an inductive limit whose building blocks are of the form
$$\begin{aligned} A_n = \left\{ (f,a) \in C([0,1],E_n) \oplus F_n \text {: }f(t) = \beta _t(a) \ \mathrm{for} \ t = 0,1 \right\} , \end{aligned}$$
where \(E_n\) is finite dimensional;
\(F_n\) is homogeneous of the form \(P_n M_{\infty }(Z_n) P_n\), where \(P_n\) is a sum of line bundles, and there are points \(\theta _n^i \in Z_n\), one for each connected component, and all connected components just consist of \(\theta _n^i\) with the only possible exception being the component of a distinguished point \(\theta _n^{\varvec{i}}\);
both \(\beta _0\) and \(\beta _1\) are compositions of the form \(F_n \rightarrow \bigoplus _i M_{[n,i]} \rightarrow E_n\), where the first homomorphism is given by evaluation at \(\theta _n^i \in Z_n\) and the second homomorphism is determined by the composites \(M_{[n,i]} \hookrightarrow \bigoplus _i M_{[n,i]} \rightarrow E_n \twoheadrightarrow E_n^p\) (where \(E_n^p\) is a matrix block of \(E_n\)), which are of the form
$$\begin{aligned} x \mapsto v^* \begin{pmatrix} x &{} &{} &{} &{} &{} \\ &{} \ddots &{} &{} &{} &{} \\ &{} &{} x &{} &{} &{} \\ &{} &{} &{} 0 &{} &{} \\ &{} &{} &{} &{} \ddots &{} \\ &{} &{} &{} &{} &{} 0 \end{pmatrix} v \end{aligned}$$
for a permutation matrix v.
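For illustration, a minimal instance of this form (not one of the building blocks constructed above, but of the same shape) is obtained by taking \(Z_n\) a single point, \(F_n = {\mathbb {C}}\), \(E_n = M_2\) and \(\beta _0 = \beta _1: \, {\mathbb {C}}\rightarrow M_2, \, a \mapsto a \cdot 1_2\), in which case
$$\begin{aligned} A_n \cong \left\{ f \in C([0,1],M_2) \text {: }f(0) = f(1) \in {\mathbb {C}}\cdot 1_2 \right\} , \end{aligned}$$
a dimension-drop-type algebra.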
The connecting maps \(\varphi _n\) of our inductive limit can be described in terms of two parts:
$$\begin{aligned}&A_n \rightarrow A_{n+1} \twoheadrightarrow F_{n+1}; \end{aligned}$$
$$\begin{aligned}&A_n \rightarrow A_{n+1} \twoheadrightarrow C([0,1],E_{n+1}). \end{aligned}$$
Both parts send \((f,a) \in A_n\) to an element which is in diagonal form up to permutation, i.e.,
$$\begin{aligned} u^* \begin{pmatrix} * &{} &{} \\ &{} * &{} \\ &{} &{} \ddots \end{pmatrix} u, \end{aligned}$$
where for the entries on the diagonal, there are the following possibilities:
a map of the form
$$\begin{aligned}&[0,1] \ni t \mapsto f^p(\lambda (t)), \ \mathrm{for} \ \mathrm{a} \ \mathrm{continuous} \ \mathrm{map} \ \lambda : \, [0,1] \rightarrow [0,1] \nonumber \\&\mathrm{with} \ \lambda ^{-1}(\left\{ 0,1 \right\} ) \subseteq \left\{ 0,1 \right\} , \end{aligned}$$
where \(f^p\) is the image of f under the canonical projection \(C([0,1],E_n) \twoheadrightarrow C([0,1],E_n^p)\);
a map of the form
$$\begin{aligned}{}[0,1] \ni t \mapsto \tau (t) a(x(t)), \end{aligned}$$
where \(x: \, [0,1] \rightarrow Z_n\) is continuous and \(\tau (t): \, P_n(x(t)) M_{\infty } P_n(x(t)) \cong P_n(\theta _n^i) M_{\infty }P_n(\theta _n^i)\) is an isomorphism depending continuously on t, with \(\theta _n^i\) in the same connected component as x(t), and \(\tau (t) = \mathrm{id}\) if \(x(t) = \theta _n^i\);
an element of \(P_{n+1} M_{\infty }(C(Z_{n+1})) P_{n+1}\) with support in an isolated point \(\theta _{n+1}^j\), which is of the form
$$\begin{aligned} f^p(\varvec{t}), \ \mathrm{for} \ \mathrm{some} \ \varvec{t} \in (0,1), \end{aligned}$$
$$\begin{aligned}&\tau (a(x)) \ \ \ \text {for some} \ x \in Z_n \ \mathrm{with} \ x \notin \{ \theta _n^i \}_i \ \nonumber \\&\text {and an isomorphism} \ \tau : \, P_n(x) M_{\infty } P_n(x) \cong P_n(\theta _n^i) M_{\infty } P_n(\theta _n^i),\qquad \end{aligned}$$
where \(\theta _n^i\) is in the same connected component as x;
$$\begin{aligned} a(\theta _n^i), \ \mathrm{where} \ \theta _n^i \ \text {is an isolated point in} \ Z_n; \end{aligned}$$
an element of \(P_{n+1} M_{\infty }(C(Z_{n+1})) P_{n+1}\), which is of the form
$$\begin{aligned} (a_{ij} \cdot q)_{ij}, \ \mathrm{where} \ q \ \text {is a line bundle over} \ Z_{n+1}, \ \mathrm{and} \ (a_{ij}) = a(\theta _n^i);\qquad \end{aligned}$$
an element of \(P_{n+1} M_{\infty }(C(Z_{n+1})) P_{n+1}\) of the form
$$\begin{aligned} a \circ \lambda , \end{aligned}$$
where \(\lambda : \, Z_{n+1} \rightarrow Z_n\) is a continuous map whose image is only contained in one wedge summand of \(Z_n\) (see our constructions in Sect. 3).
Note that in (17) and (19), we identify \(P_n(\theta _n^{\varvec{i}}) M_{\infty } P_n(\theta _n^{\varvec{i}})\) with \(M_{[n,\varvec{i}]}\) via a fixed isomorphism.
Let \(P^{\varvec{a}} \in M(A_{n+1})\) be projections, with \(\sum _{\varvec{a}} P^{\varvec{a}} = 1\), giving rise to the diagonal form in (15), and let \(\varphi ^{\varvec{a}}\) be the homomorphism \(A_n \rightarrow P^{\varvec{a}} A_{n+1} P^{\varvec{a}}, \, x \mapsto P^{\varvec{a}} u \varphi (x) u^* P^{\varvec{a}}\). Since each of the \(P^{\varvec{a}}\) either lies in \(C([0,1],E_{n+1}^q)\) or \(F_{n+1}\), we have \(\mathrm{im\,}(\varphi ^{\varvec{a}}) \subseteq P^{\varvec{a}} C([0,1],E_{n+1}^q) P^{\varvec{a}}\) or \(\mathrm{im\,}(\varphi ^{\varvec{a}}) \subseteq P^{\varvec{a}} F_{n+1} P^{\varvec{a}}\). Then both maps in (13), (14) are of the form \(u^* (\bigoplus _{\varvec{a}} \varphi ^{\varvec{a}}) u\). The unitary u is a permutation matrix for the map in (13) and is a unitary in \(C([0,1],E_{n+1})\) such that u(0) and u(1) are permutation matrices for the map in (14).
Let us write \(C_n \mathrel {:=}C([0,1],E_n)\) and \(u_{n+1} \in C_{n+1}\) for the unitary for the map in (14). The only reason we need \(u_{n+1}\) is to ensure that we send (f, a) to an element satisfying the right boundary conditions at \(t=0\) and \(t=1\). For this, only the values \(u_{n+1,t} \mathrel {:=}u_{n+1}(t)\) at \(t \in \left\{ 0,1 \right\} \) matter. Therefore, by an iterative process, we can change \(\beta _t\) in order to arrange \(u_{n+1}=1\) for the map in (14): First of all, it is easy to see that \(\varphi _n\) extends uniquely to a homomorphism \(\Phi _n: \, C_n \oplus F_n \rightarrow C_{n+1} \oplus F_{n+1}\). Let us write \(\Phi _n^C\) and \(\Phi _n^F\) for the composites
$$\begin{aligned}&C_n \oplus F_n \overset{\Phi _n}{\longrightarrow } C_{n+1} \oplus F_{n+1} \twoheadrightarrow C_{n+1} \ \mathrm{and}\nonumber \\&\quad C_n \oplus F_n \overset{\Phi _n}{\longrightarrow } C_{n+1} \oplus F_{n+1} \twoheadrightarrow F_{n+1}. \end{aligned}$$
As \(\varphi _n\) sends strictly positive elements to strictly positive elements, \(\Phi _n\) is unital. Now, for all n, let \(\Lambda _n(t) \subseteq [0,1]\) be a finite set such that for all \((f_n,a_n) \in A_n\) with \(\varphi (f_n,a_n) = (f_{n+1},a_{n+1}) \in A_{n+1}\), \(f_n \vert _{\Lambda _n(t)} \equiv 0\) implies \(f_{n+1}(t) = 0\). In other words, \(\Lambda _n(t)\) are the evaluation points for \(f_{n+1}(t)\). Similarly, let \(T_n \subseteq (0,1)\) be such that for all \((f_n,a_n) \in A_n\) with \(\varphi (f_n,a_n) = (f_{n+1},a_{n+1}) \in A_{n+1}\), \(f_n \vert _{T_n} \equiv 0\) and \(a_n = 0\) imply \(a_{n+1} = 0\). Now we choose inductively on n unitaries \(v_n \in U(C_n)\) and \(u_{n+1} \in U(C_{n+1})\) such that, for all n, \(v_n(s) = 1\) for all \(s \in (\Lambda _n(0) \cup \Lambda _n(1) \cup T_n) \setminus \left\{ 0,1 \right\} \), \(u_{n+1}(t) = u_{n+1,t}\) for \(t \in \left\{ 0,1 \right\} \), and \(v_{n+1} = \Phi _n^C(v_n,1) u_{n+1}^*\): Simply start with \(v_1 \mathrel {:=}1\), and if \(v_n\) and \(u_n\) have been chosen, choose \(u_{n+1} \in U(C_{n+1})\) such that \(u_{n+1}(t) = u_{n+1,t}\) for all \(t \in \left\{ 0,1 \right\} \) and \(u_{n+1}(s) = \Phi _n^C(v_n,1)(s)\) for all \(s \in (\Lambda _n(0) \cup \Lambda _n(1) \cup T_n) \setminus \left\{ 0,1 \right\} \), and set \(v_{n+1} \mathrel {:=}\Phi _n^C(v_n,1) u_{n+1}^*\). If we now take this \(u_{n+1}\) for the map in (14) giving rise to \(\varphi _n\) and \(\Phi _n\), then we obtain a commutative diagram
which restricts to
where the unitary \({\bar{u}}_{n+1}\) for the map in (14) for \({\bar{\varphi }}_n\) is now trivial, \({\bar{u}}_{n+1} = 1\), and \({\bar{A}}_n\) is of the same form (12) as \(A_n\), with \({\bar{\beta }}_t = v_n(t)^* \beta _t v_n(t)\) of the same form as \(\beta _t\) for \(t = 0, 1\) (the point being that \(v_n(t)\) is a permutation matrix). Obviously, we have \(\varinjlim \left\{ {\bar{A}}_n;{\bar{\varphi }}_n \right\} \cong \varinjlim \left\{ A_n;\varphi _n \right\} \).
Note that the construction described in Sect. 4.2 also encompasses (a slight modification of) the C*-algebra construction in [19, § 6]. (In particular, one obtains model algebras of rational generalized tracial rank one, in the sense of [19].)
5 Inductive limits and Cartan pairs revisited
In this section, we improve the main result in [4, § 3] and give a C*-algebraic interpretation. Let us first recall [4, Theorem 3.6]. We use the same notations and definitions as in [4, 36]. We start with the following
We can drop the assumptions of second countability for groupoids and separability for C*-algebras in [36] if we replace "topologically principal" by "effective" throughout. In other words, given a twisted étale effective groupoid \((G,\Sigma )\), i.e., a twisted étale groupoid \((G,\Sigma )\) where G is effective (not necessarily second countable), \((C^*_r(G,\Sigma ),C_0(G^{(0)}))\) is a Cartan pair; and conversely, given a Cartan pair (A, B) (where A is not necessarily separable), the Weyl twist \((G(A,B),\Sigma (A,B))\) from [36] is a twisted étale effective groupoid. These constructions are inverse to each other, i.e., there are canonical isomorphisms \((G,\Sigma ) \cong (G(C^*_r(G,\Sigma ),C_0(G^{(0)})),\Sigma (C^*_r(G,\Sigma ),C_0(G^{(0)})))\) (provided by [36, 4.13, 4.15, 4.16]) and \((A,B) \cong (C^*_r(G(A,B),\Sigma (A,B)),C_0(G(A,B)^{(0)}))\) (provided by [36, 5.3, 5.8, 5.9]). Similarly, everything in [4, § 3] works without the assumption of second countability. In particular, [4, Theorem 3.6] holds for general twisted étale groupoids if we replace "topologically principal" by "effective". This is why in this section, we formulate everything for twisted étale effective groupoids and general Cartan pairs. In our applications later on, however, we will only consider second countable groupoids and separable C*-algebras.
Now suppose that \((A_n,B_n)\) are Cartan pairs, let \((G_n,\Sigma _n)\) be their Weyl twists, and set \(X_n \mathrel {:=}G_n^{(0)}\). Let \(\varphi _n: \, A_n \rightarrow A_{n+1}\) be injective *-homomorphisms. Assume that there are twisted groupoids \((H_n,T_n)\), with \(Y_n \mathrel {:=}H_n^{(0)}\), together with twisted groupoid homomorphisms \((i_n,\imath _n): \, (H_n,T_n) \rightarrow (G_{n+1},\Sigma _{n+1})\) and \(({\dot{p}}_n,p_n): \, (H_n,T_n) \rightarrow (G_n,\Sigma _n)\) such that \(i_n: \, H_n \rightarrow G_{n+1}\) is an embedding with open image, and \({\dot{p}}_n: \, H_n \rightarrow G_n\) is surjective, proper, and fibrewise bijective (i.e., for every \(y \in Y_n\), \({\dot{p}}_n \vert _{(H_n)_y}\) is a bijection onto \((G_n)_{{\dot{p}}_n(y)}\)). Suppose that \(\varphi _n = (\imath _n)_* \circ p_n^*\) for all n. Further assume that condition (LT) is satisfied, i.e., for every continuous section \(\rho : \, U \rightarrow \rho (U)\) for the canonical projection \(\Sigma _n \twoheadrightarrow G_n\), where \(U \subseteq G_n\) is open, there is a continuous section \(\tilde{\rho }: \, {\dot{p}}_n^{-1}(U) \rightarrow \tilde{\rho }({\dot{p}}_n^{-1}(U))\) for the canonical projection \(T_n \twoheadrightarrow H_n\) such that \(\tilde{\rho }({\dot{p}}_n^{-1}(U)) \subseteq p_n^{-1}(\rho (U))\) and \(p_n \circ \tilde{\rho } = \rho \circ {\dot{p}}_n\). Also assume that condition (E) is satisfied, i.e., for every continuous section \(t: \, U \rightarrow t(U)\) for the source map of \(G_n\), where \(U \subseteq X_n\) and \(t(U) \subseteq G_n\) are open, there is a continuous section \(\tilde{t}: \, {\dot{p}}_n^{-1}(U) \rightarrow \tilde{t}({\dot{p}}_n^{-1}(U))\) for the source map of \(H_n\) such that \(\tilde{t}({\dot{p}}_n^{-1}(U)) \subseteq {\dot{p}}_n^{-1}(t(U))\) and \({\dot{p}}_n \circ \tilde{t} = t \circ {\dot{p}}_n\).
In this situation, define
$$\begin{aligned}&\Sigma _{n,0} \mathrel {:=}\Sigma _n \ \mathrm{and} \ \Sigma _{n,m+1} \mathrel {:=}p_{n+m}^{-1}(\Sigma _{n,m}) \subseteq T_{n+m} \ \mathrm{for \ all} \ n \ \mathrm{and} \ m = 0, 1, \cdots ,\nonumber \\&G_{n,0} \mathrel {:=}G_n \ \mathrm{and} \ G_{n,m+1} \mathrel {:=}{\dot{p}}_{n+m}^{-1}(G_{n,m}) \subseteq H_{n+m} \ \mathrm{for \ all} \ n \ \mathrm{and} \ m = 0, 1, \cdots , \nonumber \\&{\bar{\Sigma }}_n \mathrel {:=}\varprojlim _m \left\{ \Sigma _{n,m}; p_{n+m} \right\} \ \mathrm{and} \ {\bar{G}}_n \mathrel {:=}\varprojlim _m \left\{ G_{n,m}; {\dot{p}}_{n+m} \right\} \ \mathrm{for \ all} \ n. \end{aligned}$$
Then [4, Theorem 3.6] tells us that
(a) \(({\bar{G}}_n,{\bar{\Sigma }}_n)\) are twisted groupoids, and \((i_n,\imath _n)\) induce twisted groupoid homomorphisms \(({\bar{i}}_n,{\bar{\imath }}_n): \, ({\bar{G}}_n,{\bar{\Sigma }}_n) \rightarrow ({\bar{G}}_{n+1},{\bar{\Sigma }}_{n+1})\) such that \({\bar{i}}_n\) is an embedding with open image for all n, and
$$\begin{aligned} {\bar{\Sigma }} \mathrel {:=}\varinjlim \left\{ {\bar{\Sigma }}_n; {\bar{\imath }}_n \right\} \ \mathrm{and} \ {\bar{G}} \mathrel {:=}\varinjlim \left\{ {\bar{G}}_n; {\bar{i}}_n \right\} \end{aligned}$$
defines a twisted étale groupoid \(({\bar{G}},{\bar{\Sigma }})\),
(b) & (c) \((\varinjlim \left\{ A_n;\varphi _n \right\} ,\varinjlim \left\{ B_n;\varphi _n \right\} )\) is a Cartan pair whose Weyl twist is given by \(({\bar{G}},{\bar{\Sigma }})\).
It is clear that the proof of [4, Theorem 3.6] shows that if all \(B_n\) are C*-diagonals, i.e., all \(G_n\) are principal, then \({\bar{G}}\) is principal, i.e., \(\varinjlim \left\{ B_n;\varphi _n \right\} \) is a C*-diagonal.
It turns out that conditions (LT) and (E) are redundant.
In the situation above, conditions (LT) and (E) are automatically satisfied.
To prove condition (LT), let \(\rho : \, U \rightarrow \rho (U)\) be a continuous section for the canonical projection \(\pi _n: \, \Sigma _n \twoheadrightarrow G_n\), where \(U \subseteq G_n\) is open. Let \(\pi _{n+1}: \, \Sigma _{n+1} \twoheadrightarrow G_{n+1}\) be the canonical projection. Then \(\pi _{n+1} \vert _{p_n^{-1}(\rho (U))}: \, p_n^{-1}(\rho (U)) \rightarrow {\dot{p}}_n^{-1}(U)\) is bijective. Indeed, given \(\tau _1, \tau _2 \in p_n^{-1}(\rho (U))\) with \(\pi _{n+1}(\tau _1) = \pi _{n+1}(\tau _2) \mathrel {=:}\eta \in H_n\), we must have \(\tau _2 = z \cdot \tau _1\) for some \(z \in {\mathbb {T}}\). Also, \(\pi _n(p_n(\tau _1)) = {\dot{p}}_n(\eta ) = \pi _n(p_n(\tau _2))\). As \(\pi _n \vert _{\rho (U)}: \, \rho (U) \rightarrow U\) is bijective (with inverse \(\rho \)), we deduce \(p_n(\tau _1) = p_n(\tau _2)\). Hence \(p_n(\tau _1) = p_n(\tau _2) = z \cdot p_n(\tau _1)\), which implies \(z = 1\), i.e., \(\tau _2 = \tau _1\). This proves injectivity, and surjectivity is easy to see. As \(\pi _{n+1}\) is open, \(\tilde{\rho } \mathrel {:=}(\pi _{n+1} \vert _{p_n^{-1}(\rho (U))})^{-1}: \, {\dot{p}}_n^{-1}(U) \rightarrow p_n^{-1}(\rho (U))\) is the continuous section we are looking for.
To verify (E), let \(t: \, U \rightarrow t(U)\) be a continuous section for the source map \(s_n\) of \(G_n\), where \(U \subseteq X_n\) and \(t(U) \subseteq G_n\) are open. Let \(s_{n+1}\) be the source map of \(H_n\). Then \(s_{n+1} \vert _{{\dot{p}}_n^{-1}(t(U))}: \, {\dot{p}}_n^{-1}(t(U)) \rightarrow {\dot{p}}_n^{-1}(U)\) is bijective. Indeed, given \(\eta _1, \eta _2 \in {\dot{p}}_n^{-1}(t(U))\) with \(s_{n+1}(\eta _1) = s_{n+1}(\eta _2) \mathrel {=:}y \in Y_n\), we must have \(s_n({\dot{p}}_n(\eta _1)) = {\dot{p}}_n(y) = s_n({\dot{p}}_n(\eta _2))\). As \(s_n \vert _{t(U)}: \, t(U) \rightarrow U\) is bijective (with inverse t), we deduce \({\dot{p}}_n(\eta _1) = {\dot{p}}_n(\eta _2)\). Since \({\dot{p}}_n\) is fibrewise bijective, this implies \(\eta _1 = \eta _2\). This proves injectivity, and surjectivity is easy to see. As \({\dot{p}}_n^{-1}(t(U))\) is open and \(s_{n+1}\) is open, \(\tilde{t} \mathrel {:=}(s_{n+1} \vert _{{\dot{p}}_n^{-1}(t(U))})^{-1}: \, {\dot{p}}_n^{-1}(U) \rightarrow {\dot{p}}_n^{-1}(t(U))\) is the continuous section we are looking for.
Let us now determine which *-homomorphisms are of the form \(\imath _* \circ p^*\). Let (A, B) and \(({\hat{A}},{\hat{B}})\) be Cartan pairs with normalizers \(N \mathrel {:=}N_A(B)\), \({\hat{N}} \mathrel {:=}N_{{\hat{A}}}({\hat{B}})\) and faithful conditional expectations \(P: \, A \twoheadrightarrow B\), \({\hat{P}}: \, {\hat{A}} \twoheadrightarrow {\hat{B}}\). Let \((G,\Sigma )\) and \(({\hat{G}},{\hat{\Sigma }})\) be the Weyl twists of (A, B) and \(({\hat{A}},{\hat{B}})\). Suppose that \(\varphi : \, A \rightarrow {\hat{A}}\) is an injective *-homomorphism.
Proposition 5.4
The following are equivalent:
(i) \(\varphi (B) \subseteq {\hat{B}}\), \(\varphi (N) \subseteq {\hat{N}}\), \({\hat{P}} \circ \varphi = \varphi \circ P\);
(ii) There exists a twisted étale effective groupoid (H, T) and twisted groupoid homomorphisms \((i,\imath ): \, (H,T) \rightarrow ({\hat{G}},{\hat{\Sigma }})\), \(({\dot{p}},p): \, (H,T) \rightarrow (G,\Sigma )\), where i is an embedding with open image and \({\dot{p}}\) is surjective, proper and fibrewise bijective, such that \(\varphi = \imath _* \circ p^*\).
(ii) \(\Rightarrow \) (i): It is easy to see that \((\imath _* \circ p^*)(B) \subseteq {\hat{B}}\). Given an open bisection S of G, \({\dot{p}}^{-1}(S)\) is an open bisection of H, and then \(i({\dot{p}}^{-1}(S))\) is an open bisection of \({\hat{G}}\). Therefore, \((\imath _* \circ p^*)(N) \subseteq {\hat{N}}\). Finally, we have \({\hat{P}} \circ (\imath _* \circ p^*) = (\imath _* \circ p^*) \circ P\) because \({\dot{p}}^{-1}(G^{(0)}) = H^{(0)}\).
(i) \(\Rightarrow \) (ii): Let \(\breve{B}\) be the ideal of \({\hat{B}}\) generated by \(\varphi (B)\), and \(\breve{A} \mathrel {:=}C^*(\varphi (A),\breve{B})\). Then \((\breve{A},\breve{B})\) is a Cartan pair: It is clear that \(\breve{B}\) contains an approximate unit for \(\breve{A}\). To see that \(\breve{B}\) is maximal abelian, take \(a \in \breve{A} \cap (\breve{B})'\). Let \(b \in {\hat{B}}\), and take an approximate unit \((h_{\lambda }) \subseteq \breve{B}\) for \(\breve{A}\). Then \(ba = \lim _{\lambda } b h_{\lambda } a = \lim _{\lambda } a b h_{\lambda } = \lim _{\lambda } a h_{\lambda } b = ab\). Hence \(a \in \breve{A} \cap ({\hat{B}})' = \breve{A} \cap {\hat{B}} = \breve{B}\) (the last equality holds because \(\breve{B}\) contains an approximate unit for \(\breve{A}\), and \(\breve{B} \cdot {\hat{B}} \subseteq \breve{B}\)). This shows \(\breve{A} \cap (\breve{B})' = \breve{B}\). Moreover, we have \(\varphi (N) \subseteq \breve{N} \mathrel {:=}N_{\breve{A}}(\breve{B})\): Let \(n \in \varphi (N)\), \(b \in \breve{B}\), and \((h_{\lambda }) \subseteq B\) be an approximate unit for A. Then \(nbn^* \in {\hat{B}}\) as \(n \in \varphi (N) \subseteq {\hat{N}}\), and thus \(nbn^* = \lim _{\lambda } \varphi (h_{\lambda }) n b n^* \subseteq \overline{\varphi (B) \cdot {\hat{B}}} \subseteq \breve{B}\). Finally, it is clear that \(\breve{P} \mathrel {:=}{\hat{P}} \vert _{\breve{A}}\) is a faithful conditional expectation onto \(\breve{B}\).
Let (H, T) be the Weyl twist attached to \((\breve{A},\breve{B})\), and write \(X \mathrel {:=}G^{(0)}\), \(Y \mathrel {:=}H^{(0)}\) and \({\hat{X}} \mathrel {:=}{\hat{G}}^{(0)}\). It is easy to see that \(\breve{N} \subseteq {\hat{N}}\). Hence we may define maps
$$\begin{aligned}&i: \, H \rightarrow {\hat{G}}, \, [x,\alpha _n,y] \mapsto [x,\alpha _n,y] \ \ \\&\mathrm{and} \ \imath : \, T \rightarrow {\hat{\Sigma }}, \, [x,n,y] \mapsto [x,n,y], \ \mathrm{for} \ n \in \breve{N}. \end{aligned}$$
Clearly, i and \(\imath \) are continuous groupoid homomorphisms. i is injective since \([x,\alpha _n,y] = [x',\alpha _{n'},y']\) in \({\hat{G}}\) implies \(x = x'\), \(y = y'\) and \(\alpha _n = \alpha _{n'}\) on a neighbourhood \(U \subseteq {\hat{X}}\) of y, so that \(\alpha _n = \alpha _{n'}\) on \(U \cap Y\), which is a neighbourhood of y in Y, and hence \([x,\alpha _n,y] = [x',\alpha _{n'},y']\) in H. The image of i is given by \(\bigcup _{n \in \breve{N}} \left\{ [\alpha _n(y),\alpha _n,y] \text {: }y \in \mathrm{dom\,}(n) \right\} \) which is clearly open in \({\hat{G}}\). Finally, it is easy to see that we have a commutative diagram
where the upper horizontal map is given by inclusion, and the vertical isomorphisms are as in [36, Definition 5.4].
We now proceed to construct \(({\dot{p}},p)\). Since \(A = C^*(N)\) and \(\varphi (N) \subseteq \breve{N}\), it is easy to see that \(\breve{A} = {\overline{\mathrm{span}}}(\varphi (N) \cdot \breve{B})\). It follows that for every \(\breve{n} \in \breve{N}\) and \(y \in \mathrm{dom\,}(\breve{n})\), there is \(n \in \varphi (N)\) such that \(y \in \mathrm{dom\,}(n)\) and \([x,\breve{n},y] = [x,n,y]\) in T. Indeed, for \(a \in \mathrm{span}(\varphi (N) \cdot \breve{B})\) it is clear that \(a \equiv 0\) on \(T \setminus \big ( \bigcup _{n \in \varphi (N)} \left\{ [\alpha _n(y),n,y] \text {: }y \in \mathrm{dom\,}(n) \right\} \big )\). As the latter set is closed in T, we must have \(a \equiv 0\) on \(T \setminus \big ( \bigcup _{n \in \varphi (N)} \left\{ [\alpha _n(y),n,y] \text {: }y \in \mathrm{dom\,}(n) \right\} \big )\) for all \(a \in \breve{A}\). Hence \(T = \bigcup _{n \in \varphi (N)} \left\{ [\alpha _n(y),n,y] \text {: }y \in \mathrm{dom\,}(n) \right\} \). This observation allows us to define the maps
$$\begin{aligned}&{\dot{p}}: \, H \rightarrow G, \, [x,\alpha _{\varphi (n)},y] \mapsto [\varphi ^*(x),\alpha _n,\varphi ^*(y)] \ \mathrm{and} \\&p: \, T \rightarrow \Sigma , \, [x,\varphi (n),y] \mapsto [\varphi ^*(x),n,\varphi ^*(y)], \ \mathrm{for} \ n \in N, \end{aligned}$$
where \(\varphi ^*: \, Y \rightarrow X\) is the map dual to \(B \rightarrow \breve{B}, \, b \mapsto \varphi (b)\) determined by \(\varphi (b) = b \circ \varphi ^*\) for all \(b \in B\). Note that \(\varphi ^*\) exists since \(\varphi (B)\) is full in \(\breve{B}\). p is well-defined because \([x,\varphi (m),y] = [x,\varphi (n),y]\) implies \({\hat{P}}(\varphi (n)^*\varphi (m))(y) > 0\), so that \(P(n^*m)(\varphi ^*(y)) = \varphi (P(n^*m))(y) = {\hat{P}}(\varphi (n)^* \varphi (m))(y) > 0\), which in turn yields \([\varphi ^*(x), \alpha _m, \varphi ^*(y)] = [\varphi ^*(x), \alpha _n, \varphi ^*(y)]\). Similarly, \({\dot{p}}\) is well-defined. Clearly, \(({\dot{p}},p)\) is a twisted groupoid homomorphism. As \(\varphi \) is injective, \(\varphi ^*\) is surjective, so that \({\dot{p}}\) is surjective.
To see that \({\dot{p}}\) is proper, let \(K \subseteq G\) be compact. Given \(n \in N\), write \(U(n) \mathrel {:=}\left\{ [\alpha _n(y),\alpha _n,y] \text {: }y \in \mathrm{dom\,}(n) \right\} \) and \(K(n) \mathrel {:=}(s \vert _{U(n)})^{-1}(s(K))\). As K is compact, there exists a finite set \(\left\{ n_i \right\} \subseteq N\) such that \(K \subseteq \bigcup _i U(n_i)\), so that \(K = \bigcup _i U(n_i) \cap K \subseteq \bigcup _i K(n_i)\). Now given \(m \in N\), \({\dot{p}}([x,\alpha _{\varphi (m)},y]) \in K(n)\) implies \(\varphi ^*(y) \in s(K)\), i.e., \(y \in (\varphi ^*)^{-1}(s(K))\), \({\dot{p}}([x,\alpha _{\varphi (m)},y]) = [\varphi ^*(x), \alpha _m, \varphi ^*(y)] = [\varphi ^*(x), \alpha _n, \varphi ^*(y)]\), so that \(P(n^*m)(\varphi ^*(y)) \ne 0\), which yields \({\hat{P}}(\varphi (n)^* \varphi (m))(y) = \varphi (P(n^* m))(y) \ne 0\), thus \([x,\alpha _{\varphi (m)},y] = [x,\alpha _{\varphi (n)},y]\). Hence \({\dot{p}}^{-1}(K(n)) \subseteq \left\{ [\alpha _{\varphi (n)}(y), \varphi (n), y] \text {: }y \in (\varphi ^*)^{-1}(s(K)) \right\} = (s \vert _{U(\varphi (n))})^{-1}((\varphi ^*)^{-1}(s(K))) \mathrel {=:}\breve{K}(n)\). As \(\varphi ^*\) is proper, \(\breve{K}(n)\) is compact for all \(n \in N\). Hence \({\dot{p}}^{-1}(K) \subseteq \bigcup _i {\dot{p}}^{-1}(K(n_i)) \subseteq \bigcup _i \breve{K}(n_i)\) is a closed subset of a compact set, thus compact itself.
Moreover, given \(y \in Y\), \({\dot{p}}([w,\alpha _{\varphi (m)},y]) = {\dot{p}}([x,\alpha _{\varphi (n)},y])\) implies \([\varphi ^*(w), \alpha _m, \varphi ^*(y)] = [\varphi ^*(x), \alpha _n, \varphi ^*(y)]\), so that \({\hat{P}}(\varphi (n)^* \varphi (m))(y) = P(n^* m)(\varphi ^*(y)) \ne 0\), so that \([w,\alpha _{\varphi (m)},y] = [x,\alpha _{\varphi (n)},y]\). This shows injectivity of \({\dot{p}} \vert _{H_y}\), and it is clear that \({\dot{p}}(H_y) = G_{{\dot{p}}(y)}\). Thus \({\dot{p}}\) is fibrewise bijective.
Finally, it is easy to see that we have a commutative diagram
where the vertical isomorphisms are as in [36, Definition 5.4].
In Proposition 5.4, \(\varphi \) sends full elements to full elements if and only if we have \(i(H^{(0)}) = {\hat{G}}^{(0)}\).
Theorem 1.10 now follows from [4, Theorem 3.6], Lemma 5.3, Proposition 5.4 and Remark 5.2.
The Weyl twist of \((\varinjlim \left\{ A_n; \varphi _n \right\} , \varinjlim \left\{ B_n; \varphi _n \right\} )\) in the situation of Theorem 1.10 is given by \(({\bar{G}},{\bar{\Sigma }})\) as given in (23) and (24).
If, in Theorem 1.10, all \(\varphi _n\) send full elements to full elements, then \(G_{n,m+1}^{(0)} = H_{n+m}^{(0)} = G_{n+m+1}^{(0)}\) (where we identify \(H_{n+m}^{(0)}\) with \(i_{n+m}(H_{n+m}^{(0)})\)), so that \({\bar{G}}_n^{(0)} = \varprojlim _m \{ G_{n,m}^{(0)};{\dot{p}}_{n+m} \} \cong \varprojlim \{ G_l^{(0)}; {\dot{p}}_l \}\) for all n, and thus \({\bar{i}}_n({\bar{G}}_n^{(0)}) = {\bar{G}}_{n+1}^{(0)}\) for all n, which implies \({\bar{G}}^{(0)} \cong \varprojlim \{ G_n^{(0)}; {\dot{p}}_n \}\).
If all \(B_n\) in Theorem 1.10 are C*-diagonals, i.e., all \(G_n\) are principal, then \({\bar{G}}\) is principal.
6 Groupoid models
6.1 The building blocks
We first present groupoid models for the building blocks that give rise to our AH-algebras (see Sect. 3). Let Z be a second countable compact Hausdorff space and let \(p_i \in M_{\infty }(C(Z))\) be a finite collection of line bundles over Z. Let \(P = \sum _i{}^{\oplus } \ p_i \in M_{\infty }(C(Z))\). The following is easy to check:
\(\bigoplus _i p_i M_{\infty }(C(Z)) p_i\) is a Cartan subalgebra of \(P M_{\infty }(C(Z)) P\).
Thus, by [36, Theorem 5.9], there exists a twisted groupoid \(({\dot{{\mathcal {F}}}},{\mathcal {F}})\) (the Weyl twist) such that
$$\begin{aligned} (C^*_r({\dot{{\mathcal {F}}}},{\mathcal {F}}), \, C_0({\dot{{\mathcal {F}}}}^{(0)})) \cong (P M_{\infty }(C(Z)) P, \, \bigoplus _i p_i M_{\infty }(C(Z)) p_i). \end{aligned}$$
Let us now describe \(({\dot{{\mathcal {F}}}},{\mathcal {F}})\) explicitly. Let R be the full equivalence relation on the finite set \(\left\{ p_i \right\} \) (just a set with the same number of elements as the number of line bundles). Let \({\dot{{\mathcal {F}}}} = Z \times R\), which is a groupoid in the canonical way. For every \(p_i\), let \(T_i\) be a circle bundle over Z such that \(p_i = {\mathbb {C}}\times _{{\mathbb {T}}} T_i\). We form the circle bundles \(T_j \cdot T_i^*\), which are given as follows: For each index i, let \(\left\{ V_{i,a} \right\} _a\) be an open cover of Z, and let \(v_{i,a}\) be a trivialization of \(T_i \vert _{V_{i,a}}\). We view \(v_{i,a}\) as a continuous map \(v_{i,a}: \, V_{i,a} \rightarrow M_{\infty }\) with values in partial isometries such that \(v_{i,a}(z)\) has source projection \(e_{11}\) and range projection \(p_i(z)\), so that \(v_{i,a}(z) = p_i(z) v_{i,a}(z) e_{11}\). Here \(e_{11}\) is the rank one projection in \(M_{\infty }\) which has zero entry everywhere except in the upper left (1, 1)-entry, where the value is 1. Then
$$\begin{aligned} T_j \cdot T_i^* = \Big ( \coprod _{c,a} {\mathbb {T}}\times (V_{j,c} \cap V_{i,a}) \Big ) \Big / { }_{\sim } \end{aligned}$$
where we define \((z,x) \sim (z',x')\) if \(x = x'\), and if \(x \in V_{j,c} \cap V_{i,a}\), \(x' \in V_{j,d} \cap V_{i,b}\), then \(z' = v_{i,b} v_{j,d}^* v_{j,c} v_{i,a}^* z\).
We set
$$\begin{aligned} {\mathcal {F}}\mathrel {:=}\coprod _{j,i} T_j \cdot T_i^*. \end{aligned}$$
Note that \(T_i \cdot T_i^*\) is just the trivial circle bundle \({\mathbb {T}}\times Z\). We define a multiplication on \({\mathcal {F}}\): For ([z, x], (j, i)) and \(([z',x'],(j',i'))\) in \({\mathcal {F}}\), we can only multiply these elements if \(x = x'\) and \(i = j'\). In that case, write \(h \mathrel {:=}i'\) and assume that \(x \in V_{j,c} \cap V_{i,b}\) and \(x' = x \in V_{i,b} \cap V_{h,a}\). Then we define the product as
$$\begin{aligned} ([z,x],(j,i)) \cdot ([z',x'],(j',i')) = ([zz',x],(j,h)). \end{aligned}$$
Moreover, \({\mathcal {F}}\) becomes a twist of \({\dot{{\mathcal {F}}}}\) via the map
$$\begin{aligned}&{\mathcal {F}}\rightarrow {\dot{{\mathcal {F}}}}, \, T_j \cdot T_i^* \ni \sigma \mapsto (\pi (\sigma ),(j,i)), \\&\mathrm{where} \ \pi : \, T_j \cdot T_i^* \rightarrow Z \ \text {is the canonical projection}. \end{aligned}$$
It is now straightforward to check (compare [36]) that the twisted groupoid \(({\dot{{\mathcal {F}}}},{\mathcal {F}})\) is precisely the Weyl twist of \((P M_{\infty }(C(Z)) P, \, \bigoplus _i p_i M_{\infty }(C(Z)) p_i)\). More precisely, we have the following
We have a C(Z)-linear isomorphism \(C^*_r({\dot{{\mathcal {F}}}},{\mathcal {F}}) \cong P M_{\infty }(C(Z)) P\) sending \(\tilde{f} \in C_c({\dot{{\mathcal {F}}}},{\mathcal {F}})\) with \(\mathrm{supp}(\tilde{f}) \subseteq \left( V_{j,c} \cap V_{i,a} \right) \times \left\{ (j,i) \right\} \subseteq {\dot{{\mathcal {F}}}}\) to \(f v_{j,c} v_{i,a}^*\), where \(f \in C(Z)\) is determined by \(\tilde{f}(([z,x],(j,i))) = {\bar{z}} f(x)\). Moreover, this C(Z)-linear isomorphism identifies \(C({\dot{{\mathcal {F}}}}^{(0)})\) with \(\bigoplus _i p_i M_{\infty }(C(Z)) p_i\).
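As a sanity check on this description, consider the special case where all the line bundles \(p_i\) are trivial (for instance, when Z is a single point): then every \(T_i\) is the trivial circle bundle, all transition functions may be taken to be 1, and
$$\begin{aligned} ({\dot{{\mathcal {F}}}},{\mathcal {F}}) \cong (Z \times R, \, {\mathbb {T}}\times Z \times R), \end{aligned}$$
i.e., the twist is trivial. In this case the isomorphism above simply recovers the identification of \(P M_{\infty }(C(Z)) P\) with \(M_N(C(Z))\) (N the number of line bundles) together with its diagonal subalgebra \(\bigoplus _i p_i M_{\infty }(C(Z)) p_i \cong C(Z)^N\).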
Let us now fix n, and apply the result above to the homogeneous C*-algebra \(F \mathrel {:=}F_n\) from Sect. 4.2 to obtain a twisted groupoid \(({\dot{{\mathcal {F}}}},{\mathcal {F}})\) such that \(C^*_r({\dot{{\mathcal {F}}}},{\mathcal {F}}) \cong F\). More precisely, we apply our construction above to the summand of F corresponding to the component of \(\theta _n^{\varvec{i}}\). Note that in the construction above, all our line bundles satisfy
$$\begin{aligned} p_i(\theta _n^{\varvec{i}}) = e_{11} \end{aligned}$$
because of (7). For the other summands, it is easy to construct a groupoid model, as these are just matrix algebras, so that we can just take the full equivalence relation on finite sets.
Now our goal is to present a groupoid model for the building block \(A \mathrel {:=}A_n\) in Sect. 4.2. Let \({\mathcal {R}}\) be an equivalence relation (on a finite set) such that \(C^*({\mathcal {R}}) \cong E \mathrel {:=}E_n\). Write \({\mathcal {R}}= \coprod _p {\mathcal {R}}^p\) for subgroupoids \({\mathcal {R}}^p\) such that the isomorphism \(C^*({\mathcal {R}}) \cong E\) restricts to isomorphisms \(C^*({\mathcal {R}}^p) \cong E^p \mathrel {:=}E_n^p\). Set \({\dot{{\mathcal {C}}}} \mathrel {:=}[0,1] \times {\mathcal {R}}\). Then \(C^*_r({\dot{{\mathcal {C}}}})\) is canonically isomorphic to \(C \mathrel {:=}C([0,1],E)\). Consider the trivial twist \({\mathcal {C}}\mathrel {:=}{\mathbb {T}}\times {\dot{{\mathcal {C}}}}\) of \({\dot{{\mathcal {C}}}}\). Clearly, we have \(C^*_r({\dot{{\mathcal {C}}}} \amalg {\dot{{\mathcal {F}}}}, {\mathcal {C}}\amalg {\mathcal {F}}) \cong C \oplus F\).
For \(t=0,1\) and \(\beta _t\) as in Sect. 4.2, write
$$\begin{aligned} F \overset{\beta _t}{\longrightarrow } E \twoheadrightarrow E^p \end{aligned}$$
as the composition
$$\begin{aligned} F \rightarrow \bigoplus _l M_{n_l} \otimes {\mathbb {C}}^{I_l^p} \hookrightarrow E^p, \end{aligned}$$
where each of the components \(F \rightarrow M_{n_l} \otimes {\mathbb {C}}^{I_l^p}\) of the first map is given by
$$\begin{aligned} a \mapsto \begin{pmatrix} a(\theta ^l) &{} &{} \\ &{} a(\theta ^l) &{} \\ &{} &{} \ddots \end{pmatrix}, \end{aligned}$$
with \(\# I_l^p\) copies of \(a(\theta ^l)\) on the diagonal, and the components \(M_{n_l} \otimes {\mathbb {C}}^{I_l^p} \hookrightarrow E^p\) of the second map are given by
$$\begin{aligned} \varvec{x} \mapsto u^* \begin{pmatrix} \varvec{x} &{} &{} &{} \\ &{} 0 &{} &{} \\ &{} &{} \ddots &{} \\ &{} &{} &{} 0 \end{pmatrix} u, \end{aligned}$$
where u is a permutation matrix.
Let \(E_t^p\) be the image of \(\bigoplus _l M_{n_l} \otimes {\mathbb {C}}^{I_l^p}\) in E, and set \(E_t \mathrel {:=}\bigoplus _p E_t^p \subseteq E\), for \(t=0,1\). Let \({\mathcal {R}}_t^p \subseteq {\mathcal {R}}^p\) be subgroupoids such that the identification \(C^*({\mathcal {R}}^p) \cong E^p\) restricts to \(C^*({\mathcal {R}}_t^p) \cong E_t^p\). Write \({\mathcal {R}}_t \mathrel {:=}\coprod _p {\mathcal {R}}_t^p\), so that \(C^*({\mathcal {R}}) \cong E\) restricts to \(C^*({\mathcal {R}}_t) \cong E_t\). Let \(\sigma _t^p\) be the groupoid isomorphism \(\coprod _l {\mathcal {R}}_l \times I_l^p \cong {\mathcal {R}}_t^p\), given by a bijection of the finite unit space, corresponding to conjugation by the unitary u in (27). Let \(V_{i,a}\) and \(v_{i,a}\) be as above (introduced after Lemma 6.1). We now define a map \(\varvec{b}_t: \, {\mathbb {T}}\times (\left\{ t \right\} \times {\mathcal {R}}_t) \rightarrow \Sigma \) as follows: Given an index l and \((j,i) \in {\mathcal {R}}_l\), choose indices a and c such that \(\theta ^l \in V_{j,c} \cap V_{i,a}\). Then define
$$\begin{aligned} z_{j,i} \mathrel {:=}v_{j,c}(\theta ^l) v_{i,a}(\theta ^l)^* \in {\mathbb {T}}. \end{aligned}$$
Here, we are using (25). If \(\theta ^l\) is not the distinguished point \(\theta _n^{\varvec{i}}\), then we set \(z_{j,i} = 1\). For \(z \in {\mathbb {T}}\) and \(h \in I_l^p\), set
$$\begin{aligned}&\varvec{b}_t(z,t,\sigma _t^p((j,i),h)) \mathrel {:=}[z_{j,i},\theta ^l] \in T_j \cdot T_i^* \subseteq \Sigma , \\&\text {where we view} \ (z_{j,i},\theta ^l) \ \text {as an element in} \ {\mathbb {T}}\times (V_{j,c} \cap V_{i,a}). \end{aligned}$$
$$\begin{aligned}&{\check{\Sigma }} \mathrel {:=}\left\{ x \in {\mathcal {C}}\amalg {\mathcal {F}} \text {: }x = (z,t,\gamma ) \in {\mathbb {T}}\times [0,1] \times {\mathcal {R}}\Rightarrow \ \gamma \in {\mathcal {R}}_t \ \mathrm{for} \ t = 0,1 \right\} \nonumber \\&\quad \mathrm{and} \ \Sigma \mathrel {:=}{\check{\Sigma }} / { }_{\sim } \end{aligned}$$
where \(\sim \) is the equivalence relation on \({\check{\Sigma }}\) generated by \((z,t,\gamma ) \sim \varvec{b}_t(z,t,\gamma )\) for all \(z \in {\mathbb {T}}\), \(t=0,1\) and \(\gamma \in {\mathcal {R}}_t\). \({\check{\Sigma }}\) and \(\Sigma \) are principal \({\mathbb {T}}\)-bundles belonging to twisted groupoids, and we denote the underlying groupoids by \(\check{G}\) and G.
By construction, the canonical projection and inclusion \(\Sigma \twoheadleftarrow {\check{\Sigma }} \hookrightarrow {\mathcal {C}}\amalg {\mathcal {F}}\) induce on the level of C*-algebras
In particular, \((G,\Sigma )\) is the desired groupoid model for our building block.
In what follows, it will be necessary to keep track of the index n, so that we will consider, for all n, twisted groupoids \(({\dot{{\mathcal {C}}}}_n \amalg {\dot{{\mathcal {F}}}}_n, {\mathcal {C}}_n \amalg {\mathcal {F}}_n)\), \((\check{G}_n,{\check{\Sigma }}_n)\), \((G_n,\Sigma _n)\) describing the C*-algebras \(C_n \oplus F_n\), \(\check{A}_n\) and \(A_n\) as explained above. Moreover, for all n, let \(B_n \subseteq A_n\) be the subalgebra corresponding to \(C_0(G_n^{(0)})\) under the isomorphism \(C^*_r(G_n,\Sigma _n) \cong A_n\).
6.2 The connecting maps
Let us now describe the connecting maps \(\varphi _n: \, A_n \rightarrow A_{n+1}\) in the groupoid picture above. Let \(P_{n+1}^{\varvec{a}}\), \(\varphi _n^{\varvec{a}}\) be as in Sect. 4.2, so that \(\varphi _n = \bigoplus _{\varvec{a}} \varphi _n^{\varvec{a}}\) and \(\mathrm{im\,}(\varphi _n^{\varvec{a}}) \subseteq P_{n+1}^{\varvec{a}} A_{n+1} P_{n+1}^{\varvec{a}}\). Also, let \(\Phi _n: \, C_n \oplus F_n \rightarrow C_{n+1} \oplus F_{n+1}\) be the extension of \(\varphi _n\) as in Remark 4.1. Set \(\Phi _n^{\varvec{a}}: \, C_n \oplus F_n \rightarrow P_{n+1}^{\varvec{a}}(C_{n+1} \oplus F_{n+1}) P_{n+1}^{\varvec{a}}, \, x \mapsto P_{n+1}^{\varvec{a}} \varphi _n^{\varvec{a}}(x) P_{n+1}^{\varvec{a}}\). We obtain \({\check{\varphi }}_n: \, \check{A}_n \rightarrow \check{A}_{n+1}\) and \({\check{\varphi }}_n^{\varvec{a}}: \, \check{A}_n \rightarrow P_{n+1}^{\varvec{a}} \check{A}_{n+1} P_{n+1}^{\varvec{a}}\) by restricting \(\Phi _n\) and \(\Phi _n^{\varvec{a}}\). Set
$$\begin{aligned}&(C \oplus F)[\Phi _n] \mathrel {:=}\Big \{ x \in C_{n+1} \oplus F_{n+1}: \, x = \sum _{\varvec{a}} P_{n+1}^{\varvec{a}} x P_{n+1}^{\varvec{a}} \Big \},\\&\check{A}[{\check{\varphi }}_n^{\varvec{a}}] \mathrel {:=}\mathrm{im\,}({\check{\varphi }}_n^{\varvec{a}}),\\&\check{A}[{\check{\varphi }}_n] \mathrel {:=}\Big \{ x \in \check{A}_{n+1}: \, x = \sum _{\varvec{a}} P_{n+1}^{\varvec{a}} x P_{n+1}^{\varvec{a}}, \, P_{n+1}^{\varvec{a}} x P_{n+1}^{\varvec{a}} \in \check{A}[{\check{\varphi }}_n^{\varvec{a}}] \Big \},\\&A[\varphi _n] \mathrel {:=}A_{n+1} \cap \check{A}[{\check{\varphi }}_n]. \end{aligned}$$
Note that \(\check{A}[{\check{\varphi }}_n^{\varvec{a}}] = P_{n+1}^{\varvec{a}} F_{n+1} P_{n+1}^{\varvec{a}}\) if \(P_{n+1}^{\varvec{a}} \in F_{n+1}\) and \(\check{A}[{\check{\varphi }}_n^{\varvec{a}}] = \left\{ x \in P_{n+1}^{\varvec{a}} \check{A}_{n+1} P_{n+1}^{\varvec{a}} \text {: }x(t) \in \mathrm{im\,}({\text {ev}}_t \circ {\check{\varphi }}_n^{\varvec{a}}) \ \mathrm{for} \ t = 0,1 \right\} \) if \(P_{n+1}^{\varvec{a}} \in C_{n+1}\).
Let \({\mathcal {T}}_n\) be the open subgroupoid of \({\mathcal {C}}_{n+1} \amalg {\mathcal {F}}_{n+1}\), with \({\dot{{\mathcal {T}}}}_n \subseteq {\dot{{\mathcal {C}}}}_{n+1} \amalg {\dot{{\mathcal {F}}}}_{n+1}\) correspondingly, such that \(C^*_r({\dot{{\mathcal {C}}}}_{n+1} \amalg {\dot{{\mathcal {F}}}}_{n+1}, {\mathcal {C}}_{n+1} \amalg {\mathcal {F}}_{n+1}) \cong C_{n+1} \oplus F_{n+1}\) restricts to \(C^*_r({\dot{{\mathcal {T}}}}_n,{\mathcal {T}}_n) \cong (C \oplus F)[\Phi _n]\). Similarly, let \(\check{T}_n\) be the open subgroupoid of \({\check{\Sigma }}_{n+1}\), with \(\check{H}_n \subseteq \check{G}_{n+1} \) correspondingly, such that \(C^*_r(\check{G}_{n+1}, {\check{\Sigma }}_{n+1}) \cong \check{A}_{n+1}\) restricts to \(C^*_r(\check{H}_n,\check{T}_n) \cong \check{A}[{\check{\varphi }}_n]\). For \(\eta \in \check{T}_n\) and \(\eta ' \in {\check{\Sigma }}_{n+1}\), \(\eta \sim \eta '\) implies that \(\eta '\) lies in \(\check{T}_n\). It follows that \(T_n = \check{T}_n / { }_{\sim }\) is an open subgroupoid of \(\Sigma _{n+1}\). Define \(H_n = \check{H}_n / { }_{\sim }\) in a similar way. By construction, the commutative diagram at the groupoid level
induces at the C*-level
Let \(\check{T}_n = \coprod _{\varvec{a}} \check{T}_n^{\varvec{a}}\) and \(\check{H}_n = \coprod _{\varvec{a}} \check{H}_n^{\varvec{a}}\) be the decompositions into subgroupoids such that the identification \(C^*_r(\check{H}_n, \check{T}_n) \cong \check{A}[{\check{\varphi }}_n] \subseteq \check{A}_{n+1}\) restricts to \(C^*_r(\check{H}_n^{\varvec{a}}, \check{T}_n^{\varvec{a}}) \cong \check{A}[{\check{\varphi }}_n^{\varvec{a}}]\). For fixed n and every \(\varphi ^{\varvec{a}} = \varphi ^{\varvec{a}}_n\) from our list in Sect. 4.2, we now construct a map \(p^{\varvec{a}}: \, \check{T}_n^{\varvec{a}} \rightarrow \Sigma _n\) such that
commutes.
Recall that \({\check{\Sigma }}_n \subseteq {\mathcal {C}}_n \amalg {\mathcal {F}}_n = ({\mathbb {T}}\times [0,1] \times {\mathcal {R}}_n) \amalg {\mathcal {F}}_n\). Also, we denote the canonical projection \({\mathcal {F}}_n \twoheadrightarrow {\dot{{\mathcal {F}}}}_n\) by \(\sigma \mapsto {\dot{\sigma }}\).
For \(\varphi ^{\varvec{a}}\) as in (16), let \(p^{\varvec{a}}\) be the composite
$$\begin{aligned} \check{T}_n^{\varvec{a}} \cong {\mathbb {T}}\times [0,1] \times {\mathcal {R}}_n&\rightarrow {\check{\Sigma }}_n \overset{q}{\longrightarrow } \Sigma _n,\\ (z,t,\gamma )&\mapsto (z,\lambda (t),\gamma ) \end{aligned}$$
where we note that the first map has image in \({\check{\Sigma }}_n\), so that we can apply the quotient map \(q: \, {\check{\Sigma }}_n \twoheadrightarrow \Sigma _n\).
For \(\varphi ^{\varvec{a}}\) as in (17), let \(p^{\varvec{a}}\) be the composite
$$\begin{aligned} \check{T}_n^{\varvec{a}} \cong {\mathbb {T}}\times [0,1] \times {\mathcal {R}}_n&\rightarrow {\mathcal {F}}_n \overset{q}{\longrightarrow } \Sigma _n,\nonumber \\ (z,t,\gamma )&\mapsto z \cdot \sigma (t,\gamma ) \end{aligned}$$
where \(\sigma \) is a continuous groupoid homomorphism such that \({\dot{\sigma }}(t,\gamma ) = (x(t),\gamma )\). For \(x(t) = \theta ^l \in \left\{ \theta _n^i \right\} \) and \(\gamma = (j,i)\), write
$$\begin{aligned} \sigma (t,\gamma ) = [z_{j,i},\theta ^l], \end{aligned}$$
which has to match up with (28).
For \(\varphi ^{\varvec{a}}\) as in (18), let \(p^{\varvec{a}}\) be the composite
$$\begin{aligned} \check{T}_n^{\varvec{a}} \cong {\mathbb {T}}\times \{ \theta _{n+1}^j \} \times {\mathcal {R}}_n&\rightarrow {\check{\Sigma }}_n \overset{q}{\longrightarrow } \Sigma _n,\\ (z,\theta _{n+1}^j,\gamma )&\mapsto (z,\varvec{t},\gamma ). \end{aligned}$$
For \(\varphi ^{\varvec{a}}\) as in (19), let \(p^{\varvec{a}}\) be the composite
$$\begin{aligned} \check{T}_n^{\varvec{a}} \cong {\mathbb {T}}\times \{ \theta _{n+1}^j \} \times ({\dot{{\mathcal {F}}}}_n)^x_x&\rightarrow {\mathcal {F}}_n \overset{q}{\longrightarrow } \Sigma _n,\\ (z,\theta _{n+1}^j,\gamma )&\mapsto z \cdot \sigma (\gamma ), \end{aligned}$$
where \(\sigma : \, ({\dot{{\mathcal {F}}}}_n)^x_x \rightarrow {\mathcal {F}}_n\) is a groupoid homomorphism with \({\dot{\sigma }}(\gamma ) = (x,\gamma )\) matching up with \(\sigma \) in (29).
For \(\varphi ^{\varvec{a}}\) as in (20), let \(p^{\varvec{a}}\) be the composite
$$\begin{aligned} \check{T}_n^{\varvec{a}} \cong {\mathbb {T}}\times \{ \theta _{n+1}^j \} \times ({\dot{{\mathcal {F}}}}_n)^{\theta _n^i}_{\theta _n^i}&\rightarrow {\mathcal {F}}_n \overset{q}{\longrightarrow } \Sigma _n,\\ (z,\theta _{n+1}^j,\gamma )&\mapsto (z,\theta _n^i,\gamma ). \end{aligned}$$
For \(\varphi ^{\varvec{a}}\) as in (21), let \(p^{\varvec{a}}\) be the composite
$$\begin{aligned} \check{T}_n^{\varvec{a}} \cong {\mathbb {T}}\times Z_{n+1} \times ({\dot{{\mathcal {F}}}}_n)^{\theta _n^i}_{\theta _n^i} \twoheadrightarrow {\mathbb {T}}\times ({\dot{{\mathcal {F}}}}_n)^{\theta _n^i}_{\theta _n^i}&\rightarrow {\mathcal {F}}_n \overset{q}{\longrightarrow } \Sigma _n,\\ (z,\gamma )&\mapsto z \cdot \sigma (\gamma ), \end{aligned}$$
where \(\sigma : \, ({\dot{{\mathcal {F}}}}_n)^{\theta _n^i}_{\theta _n^i} \rightarrow {\mathcal {F}}_n\) is a groupoid homomorphism with \({\dot{\sigma }}(\gamma ) = (\theta _n^i,\gamma )\) matching up with (28), just as (30).
For \(\varphi ^{\varvec{a}}\) as in (22), we have \(C^*_r(\check{H}_n^{\varvec{a}}, \check{T}_n^{\varvec{a}}) \cong \left( \sum _i \lambda ^*(p_i) \right) \cdot F_{n+1} \cdot \left( \sum _i \lambda ^*(p_i) \right) \), where \(p_i\) are the line bundles such that \(P_n = \sum _i p_i\) (see Sect. 3 and Sect. 6.1), and \(p^{\varvec{a}}\) is the composite
$$\begin{aligned} \check{T}_n^{\varvec{a}}&\rightarrow {\mathcal {F}}_n \overset{q}{\longrightarrow } \Sigma _n,\\ [z,x]&\mapsto [z,\lambda (x)], \end{aligned}$$
with \((z,x) \in {\mathbb {T}}\times \lambda ^{-1}(V_{i,a})\) and \((z,\lambda (x)) \in {\mathbb {T}}\times V_{i,a}\), where for a given open cover \(V_{i,a}\) and trivialization \(v_{i,a}\) for \({\mathcal {F}}_n\), we choose the open cover \(\lambda ^{-1}(V_{i,a})\) and trivialization \(v_{i,a} \circ \lambda \) for \(\check{T}_n^{\varvec{a}}\) (see Sect. 6.1).
The homomorphism
$$\begin{aligned} \coprod _{\varvec{a}} p^{\varvec{a}}: \, \check{T}_n = \coprod _{\varvec{a}} \check{T}_n^{\varvec{a}} \rightarrow \Sigma _n \end{aligned}$$
must descend to \(p_n: \, T_n \rightarrow \Sigma _n\) because \(C^*_r(\coprod _{\varvec{a}} p^{\varvec{a}}): \, C^*_r(G_n, \Sigma _n) \rightarrow C^*_r(\check{H}_n, \check{T}_n), \, f \mapsto f \circ (\coprod _{\varvec{a}} p^{\varvec{a}})\) lands in \(C^*_r(H_n, T_n)\). Moreover, the homomorphisms \(\Phi _n\) and \({\check{\varphi }}_n\) admit similar groupoid models (say \({\mathcal {P}}_n\) and \(\check{p}_n\)) as \(\varphi _n\), so that we obtain a commutative diagram
7 Proofs of Theorems 1.2 and 1.3
All we have to do is to check the conditions in Theorem 1.10, using Proposition 5.4 and the groupoid models in Sect. 6. We treat the unital and stably projectionless cases simultaneously. Given a prescribed Elliott invariant, let \(A_n\) and \(\varphi _n\) be as in Sect. 4.2. Consider the groupoid models for \(A_n\) and \(\varphi _n\) in Sect. 6. First of all, by construction, \((H_n,T_n)\) is a subgroupoid of \((G_{n+1},\Sigma _{n+1})\) and \(H_n \subseteq G_{n+1}\) is open. Let \((i_n, \imath _n)\) be the canonical inclusion. Secondly, \(p_n\) is proper because all the \(p^{\varvec{a}}\) in Sect. 6.2 are proper (they are closed, and pre-images of points are compact). Thirdly, \(p_n\) is fibrewise bijective because this is true for \(\check{p}_n\) and the canonical projections \({\check{\Sigma }}_n \twoheadrightarrow \Sigma _n\), \(\check{T}_n \twoheadrightarrow T_n\). By construction, all the connecting maps \(\varphi _n\) in Sect. 4.2 are of the form \(\varphi _n = (\imath _n)_* \circ (p_n)^*\). Thus, by Proposition 5.4, the conditions in Theorem 1.10 are satisfied. Hence \(\varinjlim \left\{ B_n; \varphi _n \right\} \), with \(B_n\) as in Sect. 6.1, is a Cartan subalgebra of \(\varinjlim \left\{ A_n; \varphi _n \right\} \), and actually even a C*-diagonal by Remark 5.6 because all \(G_n\) are principal.
By Remark 5.6, the twisted groupoids \((G,\Sigma )\) we obtain in the proofs of Theorems 1.2 and 1.3 are given by the Weyl twists described by (23) and (24). Moreover, it is easy to see that for the groupoids in Sect. 6, we have \(({\mathcal {C}}_n \amalg {\mathcal {F}}_n)^{(0)} = \check{G}_n^{(0)}\), and since \(\Phi _n\), \({\check{\varphi }}_n\) and \(\varphi _n\) send full elements to full elements, \({\dot{{\mathcal {T}}}}_n^{(0)} = ({\mathcal {C}}_n \amalg {\mathcal {F}}_n)^{(0)}\), \(\check{H}_n^{(0)} = \check{G}_{n+1}^{(0)}\) and \(H_n^{(0)} = G_{n+1}^{(0)}\), for all n (by Remark 5.5). So Remark 5.6 tells us that \(G^{(0)} \cong \varprojlim \{ G_n^{(0)}; {\dot{p}}_n \}\).
We now turn to the additional statements in Sect. 1. In order to prove Corollaries 1.6 and 1.7 , we need to show the following statement. In both the unital and stably projectionless cases, let \(A = C^*_r(G,\Sigma )\), \(D = C_0(G^{(0)})\), and \(\gamma = \tilde{\gamma } \vert _T\) be as in Corollaries 1.6 and 1.7 . Let \({\mathcal {C}}\) be the canonical diagonal subalgebra of the algebra of compact operators \({\mathcal {K}}\).
There exists a positive element \(a \in D \otimes {\mathcal {C}}\subseteq A \otimes {\mathcal {K}}\) such that \(d_{\bullet }(a) = \gamma \).
Here \(d_{\bullet }(a)\) denotes the function \(T \ni \tau \mapsto d_{\tau }(a)\). For the proof, we need the following
Given a continuous affine map \(g: \, T \rightarrow (0,\infty )\) and \(\varepsilon > 0\), there exists \(z \in D \otimes D_k \subseteq A \otimes M_k \subseteq A \otimes {\mathcal {K}}\) with \(\left\| z\right\| = 1\), \(z \ge 0\), \(z \in \mathrm{Ped}(A \otimes {\mathcal {K}})\) such that \(g - \varepsilon< d_{\bullet }(z) < g + \varepsilon \).
Here \(D_k\) is the canonical diagonal subalgebra of \(M_k\).
We treat the unital and stably projectionless cases simultaneously. Let \({\hat{F}}_n\) be as in Sect. 2.1 and \({\hat{F}} \mathrel {:=}\varinjlim {\hat{F}}_n\). Choose \(a \in {\hat{F}} \otimes {\mathcal {K}}\) with \(a \ge 0\) and \(d_{\bullet }(a) = g\). Then we can choose \(b \in {\hat{F}}_n \otimes M_k\) (for n big enough) with \(b \ge 0\), \(d_{\bullet }(b)\) continuous and
$$\begin{aligned} g - \varepsilon< d_{\bullet }(b) < g + \varepsilon . \end{aligned}$$
Using [1, Theorem 3.1] just as in [37, Proof of (6.2) and (6.3)], choose \(c \in D({\hat{F}}_n) \otimes D_k\) with \(c \ge 0\) such that c and b are Cuntz equivalent, where \(D({\hat{F}}_n)\) is the canonical diagonal subalgebra of \({\hat{F}}_n\). Choose \(d \in \mathrm{Ped}(A_n \otimes M_k)\) with \(d \in D(A_n) \otimes D_k\) such that \((\pi \otimes \mathrm{id})(d) = c \), where \(\pi : \, A_n \twoheadrightarrow F_n \twoheadrightarrow {\hat{F}}_n\) is the canonical projection. Let z denote the image of d under the canonical map \(A_n \otimes M_k \rightarrow A \otimes M_k\). Then \(z \in D \otimes D_k\). It is now straightforward to check, using the isomorphism \(T(A) \cong T({\hat{F}})\) from [20, § 13], that z has the desired properties.
Proof of Proposition 7.2
There is a sequence \((\gamma _i)\) of continuous affine maps \(T \rightarrow [0,\infty )\) with \(\gamma _i \nearrow \gamma - \min (\gamma )\). Choose \(\varepsilon _i > 0\) such that \(\sum _i \varepsilon _i = \min (\gamma )\). Define \(f_i \mathrel {:=}\gamma _i + \sum _{h=1}^{i-1} \varepsilon _h\). Then \(f_i \nearrow \gamma \) and \(f_i > 0\). Moreover,
$$\begin{aligned} f_{i+1} = \gamma _{i+1} + \sum _{h=1}^i \varepsilon _h \ge \gamma _i + \Big ( \sum _{h=1}^{i-1} \varepsilon _h \Big ) + \varepsilon _i = f_i + \varepsilon _i. \end{aligned}$$
Using Lemma 7.3, proceed inductively on i to find \(z_i \in D \otimes D_{k(i)}\), starting with \(z_1\) satisfying \(f_1 - \varepsilon _1< d_{\bullet }(z_1) < f_1 + \varepsilon _1\) and then choosing \(z_{i+1}\) such that
$$\begin{aligned} \Big ( f_{i+1} - \sum _{h=1}^i d_{\bullet }(z_h) \Big ) - \varepsilon _{i+1}< d_{\bullet }(z_{i+1}) < \Big ( f_{i+1} - \sum _{h=1}^i d_{\bullet }(z_h) \Big ) + \varepsilon _{i+1}. \end{aligned}$$
Note that \(f_{i+1} - \sum _{h=1}^i d_{\bullet }(z_h) > 0\) since \(\sum _{h=1}^i d_{\bullet }(z_h) < f_i + \varepsilon _i \le f_{i+1}\).
By construction, we have
$$\begin{aligned} f_i - \varepsilon _i< \sum _{h=1}^i d_{\bullet }(z_h) < f_i + \varepsilon _i, \ \text {so that} \ \sum _{h=1}^i d_{\bullet }(z_h) \nearrow \gamma . \end{aligned}$$
Now set
$$\begin{aligned}&a \mathrel {:=}\sum \nolimits _{h=1}^{\infty }{}^{\oplus } \ \ 2^{-h} z_h, \ \text {where we put the elements} \ 2^{-h} z_h \nonumber \\&\quad \text {on the diagonal in} \ D \otimes {\mathcal {C}}. \end{aligned}$$
In this way, we obtain an element \(a \in D \otimes {\mathcal {C}}\subseteq A \otimes {\mathcal {K}}\) with \(d_{\bullet }(a) = \gamma \); indeed, \(d_{\tau }\) is unchanged under multiplication by the positive scalars \(2^{-h}\) and is additive over the orthogonal summands, so \(d_{\bullet }(a) = \sum _{h} d_{\bullet }(z_h) = \gamma \) by the previous display.
Proof of Corollaries 1.6 and 1.7
Given \(\tilde{\gamma }\) as in Corollaries 1.6 and 1.7, let \(\gamma = \tilde{\gamma } \vert _T\). Using Proposition 7.2, choose a positive element \(a \in D \otimes {\mathcal {C}}\) with \(d_{\bullet }(a) = \gamma \). In the unital case, it is straightforward to check that we can always arrange a to be purely positive. Then it is straightforward to check that \((\overline{a (A \otimes {\mathcal {K}}) a}, \overline{a (D \otimes {\mathcal {C}}) a})\) is a Cartan pair. Hence, by [36, Theorem 5.9], there is a twisted groupoid \((\tilde{G},\tilde{\Sigma })\) such that \((C^*_r(\tilde{G},\tilde{\Sigma }), C_0(\tilde{G}^{(0)})) \cong (\overline{a (A \otimes {\mathcal {K}}) a}, \overline{a (D \otimes {\mathcal {C}}) a})\). It is now easy to see (compare also [19, Corollary 6.12]) that \((\tilde{G},\tilde{\Sigma })\) has all the desired properties.
Proofs of Corollaries 1.8 and 1.9
(i) follows from the observation that we only need the twist if \(G_0\) has torsion. The claims in (ii)–(iv) about subhomogeneous building blocks and their spectra follow immediately from our constructions (see also Remark 3.9). Moreover, the inverse limit description of the unit space in Remark 7.1 and the dimension formula for inverse limits (see for instance [17, Chapter 3, § 5.3, Theorem 22]) imply that \(\mathrm{dim}\,(G^{(0)}) \le 3\) in (ii), \(\mathrm{dim}\,(G^{(0)}) \le 2\) in (iii) and \(\mathrm{dim}\,(G^{(0)}) \le 1\) in (iv) and (v). Since \(C_0(G^{(0)})\) is projectionless in Theorem 1.3, we obtain \(\mathrm{dim}\,(G^{(0)}) \ne 0\), which forces \(\mathrm{dim}\,(G^{(0)}) = 1\) in (v), in the situation of Theorem 1.3. In particular, this shows that \({\mathcal {W}}\) and \({\mathcal {Z}}_0\) have C*-diagonals with one-dimensional spectra. Similarly, given a groupoid G with \({\mathcal {Z}}\cong C^*_r(G)\), the only projections in \(C(G^{(0)})\) are 0 and 1, so that \(\mathrm{dim}\,(G^{(0)}) \ne 0\) and hence \(\mathrm{dim}\,(G^{(0)}) = 1\). It remains to prove that \(\mathrm{dim}\,(G^{(0)}) \ge 3\) in (ii), \(\mathrm{dim}\,(G^{(0)}) \ge 2\) in (iii) and \(\mathrm{dim}\,(G^{(0)}) \ge 1\) in (iv).
To do so, let us use the same notation as in Sect. 6, and write \(X_n \mathrel {:=}G_n^{(0)}\), \(Q_n \mathrel {:=}{\dot{{\mathcal {C}}}}_n^{(0)}\), and \(W_n \mathrel {:=}{\dot{{\mathcal {F}}}}_n^{(0)}\). Clearly, \(Q_n\) is homotopy equivalent to a finite set of points, so that for any cohomology theory \(H^{\bullet }\) (satisfying the Eilenberg–Steenrod axioms, see [40, Chapter 17]), we have
$$\begin{aligned} H^{\bullet }(Q_n) \cong \left\{ 0 \right\} \ \mathrm{whenever} \ \bullet \ge 1. \end{aligned}$$
Let \(P_n \mathrel {:=}\left\{ (t,x) \in Q_n \text {: }t \in \left\{ 0,1 \right\} , \, x \in {\mathcal {R}}_t^{(0)} \right\} \). Then we have a pushout diagram
where \(P_n \rightarrow W_n\) is induced by \(\varvec{b}_t\) and the left vertical arrow is the canonical inclusion. The long exact (Mayer-Vietoris type) sequence attached to the pushout reads
$$\begin{aligned} \cdots \rightarrow H^{\bullet - 1}(P_n) \rightarrow H^{\bullet }(X_n) \rightarrow H^{\bullet }(Q_n) \times H^{\bullet }(W_n) \rightarrow H^{\bullet }(P_n) \rightarrow H^{\bullet + 1}(X_n) \rightarrow \cdots . \end{aligned}$$
Since \(H^{\bullet }(P_n) \cong \left\{ 0 \right\} \) and \(H^{\bullet }(Q_n) \cong \left\{ 0 \right\} \) (see (31)), we deduce that the canonical map \(W_n \rightarrow X_n\) induces a surjection \(H^{\bullet }(X_n) \rightarrow H^{\bullet }(W_n)\) for \(\bullet \ge 1\). Moreover, the map
$$\begin{aligned} Q_{n+1} \amalg W_{n+1} = ({\dot{{\mathcal {C}}}}_{n+1} \amalg {\dot{{\mathcal {F}}}}_{n+1})^{(0)} = \check{G}_{n+1}^{(0)} = \check{H}_n^{(0)} \overset{{\hat{p}}_n}{\longrightarrow } {\hat{G}}_n^{(0)} = Q_n \amalg W_n \end{aligned}$$
induces for \(\bullet \ge 1\) a homomorphism \(H^{\bullet }(\check{p}_n): \, H^{\bullet }(W_n) \rightarrow H^{\bullet }(W_{n+1})\) which fits into the commutative diagram
Thus the canonical maps \(W_n \rightarrow X_n\) induce for all \(\bullet \ge 1\) surjections
$$\begin{aligned} \check{H}^{\bullet }(G^{(0)}) \cong \varinjlim \left\{ H^{\bullet }(X_n); H^{\bullet }(p_n) \right\} \twoheadrightarrow \varinjlim \left\{ H^{\bullet }(W_n); H^{\bullet }(\check{p}_n) \right\} . \end{aligned}$$
Here \(\check{H}^{\bullet }\) is Čech cohomology, and the first identification follows from the inverse limit description of \(G^{(0)}\) in Remark 7.1 and continuity of Čech cohomology. By construction, \(W_n = Z_n \times I_n\) for some finite set \(I_n\) and \(Z_n\) is as in Sect. 4.2. Now it is an immediate consequence of our construction that \(\varinjlim \left\{ H^{\bullet }(W_n); H^{\bullet }(\check{p}_n) \right\} \) surjects onto \(\mathrm{Tor}(G_1)\) in case (ii) for \(\bullet = 3\), \(\mathrm{Tor}(G_0)\) in case (iii) for \(\bullet = 2\), and \(G_1\) in case (iv) for \(\bullet = 1\). Hence it follows that \(\check{H}^3(G^{(0)}) \ncong \left\{ 0 \right\} \) in case (ii), \(\check{H}^2(G^{(0)}) \ncong \left\{ 0 \right\} \) in case (iii), and \(\check{H}^1(G^{(0)}) \ncong \left\{ 0 \right\} \) in case (iv). As cohomological dimension is always a lower bound for covering dimension, this implies \(\mathrm{dim}\,(G^{(0)}) \ge 3\) in case (ii), \(\mathrm{dim}\,(G^{(0)}) \ge 2\) in case (iii), and \(\mathrm{dim}\,(G^{(0)}) \ge 1\) in case (iv), as desired.
Let us describe concrete groupoid models for the Jiang–Su algebra \({\mathcal {Z}}\), the Razak–Jacelon algebra \({\mathcal {W}}\) and the stably projectionless version \({\mathcal {Z}}_0\) of the Jiang–Su algebra as in [19, Definition 7.1]. These C*-algebras can be constructed in a way which fits into the framework of Sect. 4.2, so that our general machinery in Sect. 5 produces groupoid models as in Sect. 6. In the following, we focus on \({\mathcal {Z}}\).
First we recall the original construction of \({\mathcal {Z}}\) in [23]. For every \(n \in {\mathbb {N}}\), choose natural numbers \(p_n\) and \(q_n\) such that they are relatively prime, with \(p_n \mid p_{n+1}\) and \(q_n \mid q_{n+1}\), such that \(\frac{p_{n+1}}{p_n} > 2 q_n\) and \(\frac{q_{n+1}}{q_n} > 2 p_n\). Then \({\mathcal {Z}}= \varinjlim \left\{ A_n; \varphi _n \right\} \), where \(A_n = \left\{ (f,a) \in C([0,1], E_n) \oplus F_n \text {: }f(t) = \beta _t(a) \ \mathrm{for} \ t = 0,1 \right\} \), \(E_n = M_{p_n} \otimes M_{q_n}\), \(F_n = M_{p_n} \oplus M_{q_n}\), \(\beta _0: \, M_{p_n} \oplus M_{q_n} \rightarrow M_{p_n} \otimes M_{q_n}, \, (x,y) \mapsto x \otimes 1_{q_n}\), \(\beta _1: \, M_{p_n} \oplus M_{q_n} \rightarrow M_{p_n} \otimes M_{q_n}, \, (x,y) \mapsto 1_{p_n} \otimes y\).
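To fix ideas, here is one admissible choice of parameters (sample values of our own, for illustration only; nothing in the construction depends on this choice): \(p_1 = 2\), \(q_1 = 3\), \(p_2 = 16\), \(q_2 = 15\), \(p_3 = 496\), \(q_3 = 495\), and so on. Indeed, \(\gcd (p_n,q_n) = 1\) at each stage, \(p_n \mid p_{n+1}\), \(q_n \mid q_{n+1}\), and for instance \(\frac{p_2}{p_1} = 8 > 2 q_1 = 6\), \(\frac{q_2}{q_1} = 5 > 2 p_1 = 4\), \(\frac{p_3}{p_2} = 31 > 2 q_2 = 30\) and \(\frac{q_3}{q_2} = 33 > 2 p_2 = 32\).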
To describe \(\varphi _n\) for fixed n, let \(d_0 \mathrel {:=}\frac{p_{n+1}}{p_n}\), \(d_1 \mathrel {:=}\frac{q_{n+1}}{q_n}\), \(d \mathrel {:=}d_0 \cdot d_1\), and write \(d = l_0 q_{n+1} + r_0\) with \(0 \le r_0 < q_{n+1}\), \(d = l_1 p_{n+1} + r_1\) with \(0 \le r_1 < p_{n+1}\). Note that we must have \(d_1 \mid r_0\) and \(d_0 \mid r_1\). Then
$$\begin{aligned} \varphi _n(f)= & {} u_{n+1}^* \cdot (f \circ \lambda _y)_{y \in {\mathcal {Y}}(n)} \cdot u_{n+1}, \\ \ \ \ \mathrm{where} \ {\mathcal {Y}}(n)= & {} \left\{ 1, \cdots , d \right\} \ \mathrm{and} \ \lambda _y(t) = {\left\{ \begin{array}{ll} \frac{t}{2} &{} \mathrm{if} \ 1 \le y \le r_0,\\ \frac{1}{2} &{} \mathrm{if} \ r_0< y \le d - r_1,\\ \frac{t+1}{2} &{} \mathrm{if} \ d - r_1 < y \le d. \end{array}\right. } \end{aligned}$$
Here we think of \(A_n\) as a subalgebra of \(C([0,1],E_n)\) via the embedding \(A_n \hookrightarrow C([0,1],E_n), \, (f,a) \mapsto f\).
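With the sample values introduced above (again purely for illustration), for \(n = 1\) we get \(d_0 = 8\), \(d_1 = 5\) and \(d = 40\); the divisions \(40 = 2 \cdot 15 + 10\) and \(40 = 2 \cdot 16 + 8\) give \(r_0 = 10\) and \(r_1 = 8\), and indeed \(d_1 = 5 \mid 10\) and \(d_0 = 8 \mid 8\). Thus \(\lambda _y(t) = \frac{t}{2}\) for \(1 \le y \le 10\), \(\lambda _y(t) = \frac{1}{2}\) for \(10 < y \le 32\), and \(\lambda _y(t) = \frac{t+1}{2}\) for \(32 < y \le 40\).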
To construct groupoid models for building blocks and connecting maps, start with a set \({\mathcal {X}}(1)\) with \(p_1 \cdot q_1\) elements, and define recursively \({\mathcal {X}}(n+1) \mathrel {:=}{\mathcal {X}}(n) \times {\mathcal {Y}}(n)\). Let \({\mathcal {R}}(n)\) be the full equivalence relation on \({\mathcal {X}}(n)\). Let \({\mathcal {R}}(n,p)\) and \({\mathcal {R}}(n,q)\) be the full equivalence relations on finite sets \({\mathcal {X}}(n,p)\) and \({\mathcal {X}}(n,q)\) with \(p_n\) and \(q_n\) elements. For \(t=0,1\), let \(\rho _{n+1,t}\) be the bijections corresponding to conjugation by \(u_{n+1}(t)\), which induce \(\sigma _{n,t}: \, {\mathcal {R}}(n,p) \times {\mathcal {R}}(n,q) \cong {\mathcal {R}}(n)\) corresponding to conjugation by \(v_n(t)\) introduced in Remark 4.1. Now set
$$\begin{aligned} \check{G}_n&\mathrel {:=} \big \{ (t,\gamma ) \in [0,1] \times {\mathcal {R}}(n) \text {: }\gamma \in \sigma _{n,0}({\mathcal {R}}(n,p) \times {\mathcal {X}}(n,q)) \ \mathrm{if} \ t=0, \, \gamma \in \sigma _{n,1}({\mathcal {X}}(n,p) \times {\mathcal {R}}(n,q)) \ \mathrm{if} \ t=1 \big \},\\ G_n&\mathrel {:=} \check{G}_n / { }_{\sim } \ \ \ \mathrm{where} \ \sim \ \mathrm{is} \ \mathrm{given} \ \mathrm{by} \ (0,\sigma _{n,0}(\gamma ,y)) \sim (0,\sigma _{n,0}(\gamma ,y')) \ \mathrm{and} \ (1,\sigma _{n,1}(x,\eta )) \sim (1,\sigma _{n,1}(x',\eta )). \end{aligned}$$
Define \(\check{p}_n: \, \check{H}_n \rightarrow \check{G}_n\) as the restriction of \({\mathcal {P}}_n: \, {\dot{{\mathcal {T}}}}_n \mathrel {:=}[0,1] \times {\mathcal {R}}(n) \times {\mathcal {Y}}(n) \rightarrow [0,1] \times {\mathcal {R}}(n), \, (t,\gamma ,y) \mapsto (\lambda _y(t),\gamma )\) to \(\check{H}_n \mathrel {:=}{\mathcal {P}}_n^{-1}(\check{G}_n)\). Set \(H_n \mathrel {:=}\check{H}_n / { }_{\sim }\) where \(\sim \) is the equivalence relation defining \(G_{n+1} = \check{G}_{n+1} / { }_{\sim }\). The map \(\check{p}_n\) descends to \(p_n: \, H_n \rightarrow G_n\). The groupoid G with \({\mathcal {Z}}\cong C^*_r(G)\) is now given by (23) and (24). As explained in Remark 7.1, its unit space \(X \mathrel {:=}G^{(0)}\) is given by \(X \cong \varprojlim \left\{ X_n; p_n \right\} \), where \(X_n = G_n^{(0)}\).
To further describe X, let \(\varvec{p}_n\) be the set-valued function on [0, 1] defined by \(\varvec{p}_n(s) \mathrel {:=}\left\{ \lambda _y(s) \text {: }y \in {\mathcal {Y}}(n) \right\} \). We can form the inverse limit
$$\begin{aligned} \varvec{X} \mathrel {:=}\varprojlim \left\{ [0,1]; \varvec{p}_n \right\} \mathrel {:=}\Big \{ (s_n) \in \prod _{n=1}^{\infty } [0,1]: \, s_n \in \varvec{p}_n(s_{n+1}) \Big \}. \end{aligned}$$
as in [21, § 2.2]. It is easy to see that \(X_n \rightarrow [0,1], \, [(t,x)] \mapsto t\) gives rise to a continuous surjection \(X \twoheadrightarrow \varvec{X}\) whose fibres are all homeomorphic to the Cantor space. Moreover, \(\varvec{X}\) is connected and locally path connected. The space X itself is also connected. This follows easily from the construction itself (basically from \(\gcd (p_n,q_n) = 1\)) and also for abstract reasons because \({\mathcal {Z}}\) is unital projectionless. In addition, it is straightforward to check that for particular choices of \(\rho _{n,t}\) and hence \(\sigma _{n,t}\), our space X becomes locally path connected as well. In that case, it is a one-dimensional Peano continuum.
Every \(X_n\) is homotopy equivalent to a finite bouquet of circles. It is then easy to compute K-theory and Čech (co)homology:
$$\begin{aligned}&K_0(C(X)) = {\mathbb {Z}}[1], \ \ \ K_1(C(X)) \cong \bigoplus _{i=1}^{\infty } {\mathbb {Z}};\end{aligned}$$
$$\begin{aligned}&\check{H}^{\bullet }(X) \cong {\left\{ \begin{array}{ll} {\mathbb {Z}}&{} \mathrm{for} \ \bullet = 0,\\ \bigoplus _{i=1}^{\infty } {\mathbb {Z}}&{} \mathrm{for} \ \bullet = 1,\\ \left\{ 0 \right\} &{} \mathrm{for} \ \bullet \ge 2,\\ \end{array}\right. } \ \ \ \mathrm{and} \ \ \ \check{H}_{\bullet }(X) \cong {\left\{ \begin{array}{ll} {\mathbb {Z}}&{} \mathrm{for} \ \bullet = 0,\\ \prod _{i=1}^{\infty } {\mathbb {Z}}&{} \mathrm{for} \ \bullet = 1,\\ \left\{ 0 \right\} &{} \mathrm{for} \ \bullet \ge 2.\\ \end{array}\right. }\nonumber \\ \end{aligned}$$
It follows that for choices of \(\rho _{n,t}\) and \(\sigma _{n,t}\) such that X is locally path connected, X must be shape equivalent to the Hawaiian earring by [7]. In particular, its first Čech homotopy group is isomorphic to the one of the Hawaiian earring, which is the canonical projective limit of non-abelian free groups of finite rank. Moreover, by [9], the singular homology \(H_1(X)\) coincides with the singular homology of the Hawaiian earring, which is described in [10]. We refer the reader to [33] for more information about shape theory, which is the natural framework to study our space since it is constructed as an inverse limit.
Now we turn to \({\mathcal {W}}\). Recall the construction in [22]. For every \(n \in {\mathbb {N}}\), choose integers \(a_n, b_n \ge 1\) with \(a_{n+1} = 2 a_n + 1\), \(b_{n+1} = a_{n+1} \cdot b_n\). Then \({\mathcal {W}}= \varinjlim \left\{ A_n; \varphi _n \right\} \), where \(A_n = \left\{ (f,a) \in C([0,1], E_n) \oplus F_n \text {: }f(t) = \beta _t(a) \ \mathrm{for} \ t = 0,1 \right\} \), \(E_n = M_{(a_n + 1) \cdot b_n}\), \(F_n = M_{b_n}\), with
$$\begin{aligned}&\beta _0: \, M_{b_n} \rightarrow M_{(a_n + 1) \cdot b_n}, \, x \mapsto \begin{pmatrix} x &{} &{} &{} \\ &{} \ddots &{} &{} \\ &{} &{} x &{} \\ &{} &{} &{} 0 \end{pmatrix}\\&\mathrm{and} \ \ \beta _1: \, M_{b_n} \rightarrow M_{(a_n + 1) \cdot b_n}, \, x \mapsto \begin{pmatrix} x &{} &{} \\ &{} \ddots &{} \\ &{} &{} x \end{pmatrix}, \end{aligned}$$
where we put \(a_n\) copies of x on the diagonal for \(\beta _0\), and \(a_n + 1\) copies of x on the diagonal for \(\beta _1\). To describe \(\varphi _n\) for fixed n, let \(d \mathrel {:=}2 a_{n+1}\). Then
$$\begin{aligned}&\varphi _n(f) = u_{n+1}^* \cdot (f \circ \lambda _y)_{y \in {\mathcal {Y}}(n)} \cdot u_{n+1}, \ \ \ \\&\mathrm{where} \ {\mathcal {Y}}(n) = \left\{ 1, \cdots , d \right\} \ \mathrm{and} \ \lambda _y(t) = {\left\{ \begin{array}{ll} \frac{t}{2} &{} \mathrm{if} \ 1 \le y \le a_{n+1},\\ \frac{1}{2} &{} \mathrm{if} \ y = a_{n+1} + 1,\\ \frac{t+1}{2} &{} \mathrm{if} \ a_{n+1} + 1 < y \le d. \end{array}\right. } \end{aligned}$$
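For a concrete illustration (these sample values are ours and are not fixed by the construction): taking \(a_1 = b_1 = 1\) gives \(E_1 = M_2\), \(F_1 = M_1 = {\mathbb {C}}\), and then \(a_2 = 3\), \(b_2 = 3\), \(a_3 = 7\), \(b_3 = 21\), and so on. For \(n = 1\) we have \(d = 2 a_2 = 6\), so \(\lambda _y(t) = \frac{t}{2}\) for \(y \in \left\{ 1,2,3 \right\} \), \(\lambda _y(t) = \frac{1}{2}\) for \(y = 4\), and \(\lambda _y(t) = \frac{t+1}{2}\) for \(y \in \left\{ 5,6 \right\} \).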
To construct groupoid models, start with a set \({\mathcal {X}}(1)\) with \((a_1 + 1) \cdot b_1\) elements, and define recursively \({\mathcal {X}}(n+1) \mathrel {:=}{\mathcal {X}}(n) \times {\mathcal {Y}}(n)\). Let \({\mathcal {R}}(n)\) be the full equivalence relation on \({\mathcal {X}}(n)\). Let \({\mathcal {R}}(n,a)\) and \({\mathcal {R}}(n,b)\) be the full equivalence relations on finite sets \({\mathcal {X}}(n,a)\) and \({\mathcal {X}}(n,b)\) with \(a_n + 1\) and \(b_n\) elements, and let \({\mathcal {X}}'(n,a) \subseteq {\mathcal {X}}(n,a)\) be a subset with \(a_n\) elements (corresponding to the multiplicity of \(\beta _0\)). For \(t=0,1\), let \(\rho _{n+1,t}\) be the bijections corresponding to conjugation by \(u_{n+1}(t)\), which induce \(\sigma _{n,t}: \, {\mathcal {R}}(n,a) \times {\mathcal {R}}(n,b) \cong {\mathcal {R}}(n)\) corresponding to conjugation by \(v_n(t)\) introduced in Remark 4.1. Set
$$\begin{aligned}&\check{G}_n \mathrel {:=}\big \{ (t,\gamma ) \in [0,1] \times {\mathcal {R}}(n) \text {: }\gamma \in \sigma _{n,0}({\mathcal {X}}'(n,a) \times {\mathcal {R}}(n,b)) \ \mathrm{if} \ t=0, \, \gamma \in \sigma _{n,1}({\mathcal {X}}(n,a) \times {\mathcal {R}}(n,b)) \ \mathrm{if} \ t=1 \big \},\\&G_n \mathrel {:=}\check{G}_n / { }_{\sim } \ \ \ \mathrm{where} \ \sim \ \mathrm{is} \ \mathrm{given} \ \mathrm{by} \ (t,\sigma _{n,t}(x,\gamma )) \sim (t',\sigma _{n,t'}(x',\gamma )). \end{aligned}$$
Now define \(\check{p}_n: \, \check{H}_n \rightarrow \check{G}_n\) as the restriction of \({\mathcal {P}}_n: \, {\dot{{\mathcal {T}}}}_n \mathrel {:=}[0,1] \times {\mathcal {R}}(n) \times {\mathcal {Y}}(n) \rightarrow [0,1] \times {\mathcal {R}}(n), \, (t,\gamma ,y) \mapsto (\lambda _y(t),\gamma )\) to \(\check{H}_n \mathrel {:=}{\mathcal {P}}_n^{-1}(\check{G}_n)\). Set \(H_n \mathrel {:=}\check{H}_n / { }_{\sim }\) where \(\sim \) is the equivalence relation defining \(G_{n+1} = \check{G}_{n+1} / { }_{\sim }\). The map \(\check{p}_n\) descends to \(p_n: \, H_n \rightarrow G_n\). The groupoid G with \({\mathcal {W}}\cong C^*_r(G)\) is now given by (23) and (24). As explained in Remark 7.1, its unit space \(X \mathrel {:=}G^{(0)}\) is given by \(X \cong \varprojlim \left\{ X_n; p_n \right\} \), where \(X_n = G_n^{(0)}\). As in the case of \({\mathcal {Z}}\), X surjects continuously onto \(\varprojlim \left\{ {\mathbb {T}}; \varvec{p}_n \right\} \) with Cantor space fibres, where \({\mathbb {T}}= [0,1] / { }_{0 \sim 1}\) and \(\varvec{p}_n([s]) = \left\{ [\lambda _y(s)] \text {: }y \in {\mathcal {Y}}(n) \right\} \). However, it is easy to see that (at least for some choices of \(\rho _{n,t}\) and \(\sigma _{n,t}\)), X will not be connected, though its connected components all have to be non-compact.
Now let us treat \({\mathcal {Z}}_0\). For every \(n \in {\mathbb {N}}\), choose integers \(a_n, b_n, h_n \ge 1\) with \(a_{n+1} = ((2a_n + 2)h_n + 1) \cdot a_n\), \(b_{n+1} = ((2a_n + 2)h_n + 1) \cdot b_n\). Let \(A_n = \left\{ (f,a) \in C([0,1], E_n) \oplus F_n \text {: }f(t) = \beta _t(a) \ \mathrm{for} \ t = 0,1 \right\} \), with \(E_n = M_{(2a_n + 2) \cdot b_n}\), \(F_n = M_{b_n} \oplus M_{b_n}\),
$$\begin{aligned}&\beta _0: \, F_n \rightarrow E_n, \, (x,y) \mapsto \left( {\begin{matrix} x &{} &{} &{} &{} &{} &{} &{}\\ &{} \ddots &{} &{} &{} &{} &{} &{}\\ &{} &{} x &{} &{} &{} &{} &{}\\ &{} &{} &{} 0 &{} &{} &{} &{}\\ &{} &{} &{} &{} y &{} &{} &{}\\ &{} &{} &{} &{} &{} \ddots &{} &{}\\ &{} &{} &{} &{} &{} &{} y &{}\\ &{} &{} &{} &{} &{} &{} &{} 0 \end{matrix}} \right) , \\&\ \ \ \mathrm{and} \ \ \ \beta _1: \, F_n \rightarrow E_n, \, (x,y) \mapsto \left( {\begin{matrix} x &{} &{} &{} &{} &{}\\ &{} \ddots &{} &{} &{} &{}\\ &{} &{} x &{} &{} &{}\\ &{} &{} &{} y &{} &{}\\ &{} &{} &{} &{} \ddots &{}\\ &{} &{} &{} &{} &{} y\\ \end{matrix}} \right) , \end{aligned}$$
where we put \(a_n\) copies of x and y on the diagonal for \(\beta _0\), and \(a_n + 1\) copies of x and y on the diagonal for \(\beta _1\). To describe the connecting maps \(\varphi _n: \, A_n \rightarrow A_{n+1}\), fix n and let \(d \mathrel {:=}(2 a_{n+1} + 2)h_n + (2 a_n h_n + 1)\). Then \((2a_{n+1} + 2) \cdot b_{n+1} = d \cdot (2a_n + 2) \cdot b_n\). It is now easy to see that for suitable choices of unitaries \(u_{n+1}\), whose values at 0 and 1 are permutation matrices, we obtain a homomorphism \(\varphi _n: \, A_n \rightarrow A_{n+1}\) by setting
$$\begin{aligned} \varphi _n(f)&\mathrel {:=} u_{n+1}^* \cdot (f \circ \lambda _y)_{y \in {\mathcal {Y}}(n)} \cdot u_{n+1}, \ \mathrm{for} \ {\mathcal {Y}}(n) = \left\{ 1, \cdots , d \right\} , \, \lambda _y(t) \\= & {} {\left\{ \begin{array}{ll} \frac{t}{2} &{} \mathrm{if} \ 1 \le y \le 2 a_n h_n + 2h_n + 1,\\ \frac{1}{2} &{} \mathrm{if} \ 2 a_n h_n + 2h_n + 1< y \le (2a_{n+1} + 2) h_n,\\ \frac{t+1}{2} &{} \mathrm{if} \ (2a_{n+1} + 2) h_n < y \le d. \end{array}\right. } \end{aligned}$$
As above, we think of \(A_n\) as a subalgebra of \(C([0,1],E_n)\) via \(A_n \hookrightarrow C([0,1],E_n), \, (f,a) \mapsto f\). Now arguments similar to those in [22, 23] show that \(\varinjlim \left\{ A_n; \varphi _n \right\} \) has the same Elliott invariant as \({\mathcal {Z}}_0\), so that \({\mathcal {Z}}_0 \cong \varinjlim \left\{ A_n; \varphi _n \right\} \) by [37, Corollary 6.2.4] (see also [19, Theorem 12.2]).
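To illustrate the combinatorics with concrete numbers (again sample values of our own, for illustration only): taking \(a_1 = b_1 = 1\) and \(h_n = 1\) for all n gives \(a_2 = b_2 = 5\), and for \(n = 1\) we get \(d = 12 + 3 = 15\); consistently, \((2a_2 + 2) \cdot b_2 = 60 = 15 \cdot (2a_1 + 2) \cdot b_1\). In this case \(\lambda _y(t) = \frac{t}{2}\) for \(1 \le y \le 5\), \(\lambda _y(t) = \frac{1}{2}\) for \(5 < y \le 12\), and \(\lambda _y(t) = \frac{t+1}{2}\) for \(12 < y \le 15\).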
To construct groupoid models, start with a set \({\mathcal {X}}(1)\) with \((2a_1 + 2) \cdot b_1\) elements, and define recursively \({\mathcal {X}}(n+1) \mathrel {:=}{\mathcal {X}}(n) \times {\mathcal {Y}}(n)\). Let \({\mathcal {R}}(n)\) be the full equivalence relation on \({\mathcal {X}}(n)\). Let \({\mathcal {R}}(n,a,1)\), \({\mathcal {R}}(n,a,2)\), \({\mathcal {R}}(n,b,1)\) and \({\mathcal {R}}(n,b,2)\) be full equivalence relations on finite sets \({\mathcal {X}}(n,a,1)\), \({\mathcal {X}}(n,a,2)\), \({\mathcal {X}}(n,b,1)\) and \({\mathcal {X}}(n,b,2)\) with \(a_n + 1\), \(a_n + 1\), \(b_n\) and \(b_n\) elements, respectively. Let \({\mathcal {X}}_0(n,a,1) \subseteq {\mathcal {X}}(n,a,1)\) and \({\mathcal {X}}_0(n,a,2) \subseteq {\mathcal {X}}(n,a,2)\) be subsets with \(a_n\) elements (corresponding to the multiplicities of \(\beta _0\)), and set \({\mathcal {X}}_1(n,a,\bullet ) \mathrel {:=}{\mathcal {X}}(n,a,\bullet )\). For \(t=0,1\), let \(\rho _{n+1,t}\) be the bijections corresponding to conjugation by \(u_{n+1}(t)\), which induce \(\sigma _{n,t}: \, {\mathcal {R}}(n,a,1) \times {\mathcal {R}}(n,b,1) \amalg {\mathcal {R}}(n,a,2) \times {\mathcal {R}}(n,b,2) \cong {\mathcal {R}}(n)\) corresponding to conjugation by \(v_n(t)\) introduced in Remark 4.1. Set
$$\begin{aligned}&\check{G}_n \mathrel {:=}\big \{ (t,\gamma ) \in [0,1] \times {\mathcal {R}}(n) \text {: }\gamma \in \sigma _{n,t}({\mathcal {X}}_t(n,a,1) \times {\mathcal {R}}(n,b,1) \amalg {\mathcal {X}}_t(n,a,2) \times {\mathcal {R}}(n,b,2)) \ \mathrm{if} \ t \in \left\{ 0,1 \right\} \big \},\\&G_n \mathrel {:=}\check{G}_n / { }_{\sim } \ \ \ \mathrm{where} \ \sim \ \mathrm{is} \ \mathrm{given} \ \mathrm{by} \ (t,\sigma _{n,t}(x,\gamma )) \sim (t',\sigma _{n,t'}(x',\gamma )). \end{aligned}$$
Now define \({\check{p}_n}: \, \check{H}_n \rightarrow \check{G}_n\) as the restriction of \({{\mathcal {P}}_n}: \, {\dot{{\mathcal {T}}}}_n \mathrel {:=}[0,1] \times {\mathcal {R}}(n) \times {\mathcal {Y}}(n) \rightarrow [0,1] \times {\mathcal {R}}(n), \, (t,\gamma ,y) \mapsto (\lambda _y(t),\gamma )\) to \(\check{H}_n \mathrel {:=}{\mathcal {P}}_n^{-1}(\check{G}_n)\). Set \(H_n \mathrel {:=}\check{H}_n / { }_{\sim }\) where \(\sim \) is the equivalence relation defining \(G_{n+1} = \check{G}_{n+1} / { }_{\sim }\). The map \(\check{p}_n\) descends to \(p_n: \, H_n \rightarrow G_n\). The groupoid G with \({\mathcal {Z}}_0 \cong C^*_r(G)\) is now given by (23) and (24). As explained in Remark 7.1, its unit space \(X \mathrel {:=}G^{(0)}\) is given by \(X \cong \varprojlim \left\{ X_n; p_n \right\} \), where \(X_n = G_n^{(0)}\). As for \({\mathcal {W}}\), X surjects continuously onto \(\varprojlim \left\{ {\mathbb {T}}; \varvec{p}_n \right\} \) with Cantor space fibres, where \({\mathbb {T}}= [0,1] / { }_{0 \sim 1}\) and \(\varvec{p}_n([s]) = \left\{ [\lambda _y(s)] \text {: }y \in {\mathcal {Y}}(n) \right\} \). However, it is easy to see that (at least for some choices of \(\rho _{n,t}\) and \(\sigma _{n,t}\)), X will not be connected, though its connected components all have to be non-compact.
Antoine, R., Perera, F., Santiago, L.: Pullbacks, \(C(X)\)-algebras, and their Cuntz semigroup. J. Funct. Anal. 260(10), 2844–2880 (2011)
Austin, K., Mitra, A.: Groupoid models for the Jiang–Su and Razak–Jacelon algebras: an inverse limit approach. arXiv:1804.00967 (preprint)
Barlak, S., Li, X.: Cartan subalgebras and the UCT problem. Adv. Math. 316, 748–769 (2017)
Barlak, S., Li, X.: Cartan subalgebras and the UCT problem, II. arXiv:1704.04939 (preprint)
Barlak, S., Szabo, G.: Problem sessions. In: Mini-workshop: MASAs and automorphisms of C*-algebras. Oberwolfach Rep. vol. 14, no. 3, pp. 2601–2629 (2017)
Donsig, A., Pitts, D.R.: Coordinate systems and bounded isomorphisms. J. Oper. Theory 59(2), 359–416 (2008)
Daverman, R.J., Venema, G.A.: CE equivalence and shape equivalence of 1-dimensional compacta. Topol. Appl. 26(2), 131–142 (1987)
Deeley, R.J., Putnam, I.F., Strung, K.R.: Constructing minimal homeomorphisms on point-like spaces and a dynamical presentation of the Jiang–Su algebra. J. Reine Angew. Math. 742, 241–261 (2018)
Eda, K.: Singular homology groups of one-dimensional Peano continua. Fund. Math. 232(2), 99–115 (2016)
Eda, K., Kawamura, K.: The singular homology of the Hawaiian earring. J. Lond. Math. Soc. (2) 62(1), 305–310 (2000)
Elliott, G.A.: An invariant for simple C*-algebras, Canadian Mathematical Society, 1945–1995, vol. 3, pp. 61–90. Canadian Mathematical Society, Ottawa, ON (1996)
Elliott, G.A., Gong, G., Lin, H., Niu, Z.: On the classification of simple amenable \(C^*\)-algebras with finite decomposition rank, II. arXiv:1507.03437v3 (preprint)
Elliott, G.A., Gong, G., Lin, H., Niu, Z.: Simple stably projectionless \(C^*\)-algebras with generalized tracial rank one. arXiv:1711.01240v5 (preprint)
Elliott, G.A., Gong, G., Lin, H., Niu, Z.: The classification of simple separable KK-contractible \(C^*\)-algebras with finite nuclear dimension. arXiv:1712.09463 (preprint)
Elliott, G.A., Niu, Z.: The classification of simple separable KK-contractible C*-algebras with finite nuclear dimension. arXiv:1611.05159 (preprint)
Elliott, G.A., Villadsen, J.: Perforated ordered \(K_0\)-groups. Canad. J. Math. 52(6), 1164–1191 (2000)
Fedorchuk, V.V.: The Fundamentals of Dimension Theory. In: Arkhangel'skii, A.V., Pontryagin, L.S. (eds.) Encyclopaedia of Mathematical Sciences, General Topology I, vol. 17. Springer, Berlin (1993)
Gong, G., Lin, H.: On classification of non-unital simple amenable C*-algebras, I. arXiv:1611.04440v3 (preprint)
Gong, G., Lin, H.: On classification of simple non-unital amenable C*-algebras, II. arXiv:1702.01073v2 (preprint)
Gong, G., Lin, H., Niu, Z.: Classification of finite simple amenable \({{\cal{Z}}}\)-stable C*-algebras. arXiv:1501.00135v6 (preprint)
Ingram, W.T., Mahavier, W.S.: Inverse Limits. From Continua to Chaos, Developments in Mathematics, vol. 25. Springer, New York (2012)
Jacelon, B.: A simple, monotracial, stably projectionless C*-algebra. J. Lond. Math. Soc. (2) 87(2), 365–383 (2013)
Jiang, X., Su, H.: On a simple unital projectionless C*-algebra. Am. J. Math. 121(2), 359–413 (1999)
Kirchberg, E., Phillips, N.C.: Embedding of exact \(C^*\)-algebras in the Cuntz algebra \({{\cal{O}}}_2\). J. Reine Angew. Math. 525, 17–53 (2000)
Kumjian, A.: On \(C^*\)-diagonals. Canad. J. Math. 38(4), 969–1008 (1986)
Li, X.: Continuous orbit equivalence rigidity. Ergod. Theor. Dyn. Syst. 38, 1543–1563 (2018)
Li, X.: Partial transformation groupoids attached to graphs and semigroups. Int. Math. Res. Not. 2017, 5233–5259 (2017)
Li, X.: Dynamic characterizations of quasi-isometry, and applications to cohomology. Algebr. Geom. Topol. 18(6), 3477–3535 (2018)
Li, X., Renault, J.: Cartan subalgebras in C*-algebras. Existence and uniqueness. Trans. Am. Math. Soc. 372(3), 1985–2010 (2019)
Lin, H.: Simple \(C^*\)-algebras with continuous scales and simple corona algebras. Proc. Am. Math. Soc. 112(3), 871–880 (1991)
Lin, H.: Simple corona \(C^*\)-algebras. Proc. Am. Math. Soc. 132(11), 3215–3224 (2004)
Lin, H.: From the basic homotopy lemma to the classification of \(C^*\)-algebras, CBMS Regional Conference Series in Mathematics, 124, Published for the Conference Board of the Mathematical Sciences, Washington, DC. American Mathematical Society, Providence, RI (2017)
Mardešić, S., Segal, J.: Shape Theory. The Inverse System Approach, North-Holland Mathematical Library, vol. 26. North-Holland Publishing Co., Amsterdam (1982)
Phillips, N.C.: A classification theorem for nuclear purely infinite simple \(C^*\)-algebras. Doc. Math. 5, 49–114 (2000)
Putnam, I.F.: Some classifiable groupoid \(C^*\)-algebras with prescribed K-theory. Math. Ann. 370(3–4), 1361–1387 (2018)
Renault, J.: Cartan subalgebras in C*-algebras. Irish Math. Soc. Bull. 61, 29–63 (2008)
Robert, L.: Classification of inductive limits of 1-dimensional NCCW complexes. Adv. Math. 231(5), 2802–2836 (2012)
Rørdam, M.: Classification of nuclear, simple \(C^*\)-algebras. In: Classification of nuclear \(C^*\)-algebras. Entropy in Operator Algebras, pp. 1–145, Encyclopaedia Math. Sci., 126, Oper. Alg. Non-commut. Geom. 7. Springer, Berlin (2002)
Rosenberg, J.: Algebraic \(K\)-Theory and Its Applications. Graduate Texts in Mathematics, 147. Springer, New York (1994)
tom Dieck, T.: Algebraic Topology, EMS Textbooks in Mathematics. European Mathematical Society, Zürich (2008)
Spielberg, J.: Graph-based models for Kirchberg algebras. J. Oper. Theory 57(2), 347–374 (2007)
Thiel, H.: A list of open problems and goals recorded during the workshop "Future Targets in the Classification Program for Amenable C*-algebras". BIRS, Banff. Retrieved from https://www.birs.ca/workshops/2017/17w5127/files/FutureTargets-ProblemList.pdf (2017)
Thomsen, K.: On the ordered \(K_0\) group of a simple \(C^*\)-algebra. K-Theory 14(1), 79–99 (1998)
Tikuisis, A., White, S., Winter, W.: Quasidiagonality of nuclear \(C^*\)-algebras. Ann. Math. (2) 185(1), 229–284 (2017)
\begin{document}
\title{Normalized solutions for a Schr\"{o}dinger equation with critical growth in $\mathbb{R}^{N}$} \author{ Claudianor O. Alves,\footnote{C.O. Alves was partially supported by CNPq/Brazil grant 304804/2017-7.} \,\, Chao Ji\footnote{Corresponding author} \footnote{C. Ji was partially supported by Natural Science Foundation of Shanghai(20ZR1413900,18ZR1409100).} \,\, and \,\, Olimpio H. Miyagaki\footnote{O. H. Miyagaki was supported by FAPESP/Brazil grant 2019/24901-3 and CNPq/Brazil grant 307061/2018-3.}}
\maketitle
\begin{abstract} In this paper we study the existence of normalized solutions to the following nonlinear Schr\"{o}dinger equation with critical growth \begin{align*}
\left\{ \begin{aligned} &-\Delta u=\lambda u+f(u), \quad \quad \hbox{in }\mathbb{R}^N,\\
&\int_{\mathbb{R}^{N}}|u|^{2}dx=a^{2}, \end{aligned} \right. \end{align*}
where $a>0$, $\lambda\in \mathbb{R}$ and $f$ has an exponential critical growth when $N=2$, and $f(t)=\mu |t|^{q-2}t+|t|^{2^*-2}t$ with $q \in (2+\frac{4}{N},2^*)$, $\mu>0$ and $2^*=\frac{2N}{N-2}$ when $N \geq 3$. Our main results complement some recent results for $N \geq 3$ and are completely new for $N=2$. \end{abstract}
{\small \textbf{2010 Mathematics Subject Classification:} 35A15, 35J10, 35B09, 35B33.}
{\small \textbf{Keywords:} Normalized solutions, Nonlinear Schr\"odinger equation, Variational methods, Critical exponents.}
\section{Introduction}
This paper concerns the existence of normalized solutions to the following nonlinear Schr\"{o}dinger equation with critical growth \begin{align}\label{11}
\left\{ \begin{aligned}
&-\Delta u=\lambda u+f(u), \quad
\quad
\hbox{in }\mathbb{R}^N,\\
&\int_{\mathbb{R}^{N}}|u|^{2}dx=a^{2}, \end{aligned} \right. \end{align}
where $a>0$, $\lambda\in \mathbb{R}$ and $f$ has an exponential critical growth when $N=2$, and $f(t)=\mu |t|^{q-2}t+|t|^{2^*-2}t$ with $q \in (2+\frac{4}{N},2^*)$, $\mu>0$ and $2^*=\frac{2N}{N-2}$ when $N \geq 3$.\\
The equation \eqref{11} arises when one looks for solutions with prescribed mass for the nonlinear Schr\"{o}dinger equation \begin{equation*}
i\frac{\partial \psi}{\partial t}+\triangle \psi+h(|\psi|^{2})\psi=0 \quad \hbox{in }\mathbb{R}^N. \end{equation*}
A stationary wave solution is a solution of the form $\psi(t, x)=e^{-i\lambda t}u(x)$, where $\lambda\in \mathbb{R}$ and $u:\mathbb{R}^N\rightarrow \mathbb{R}$ is a time-independent function that must solve the elliptic problem \begin{equation} \label{g00} -\Delta u=\lambda u+g(u), \quad \mbox{in} \quad \mathbb{R}^N, \end{equation}
where $g(u)=h(|u|^2)u$. For some values of $\lambda$, nontrivial solutions of (\ref{g00}) are obtained as critical points of the action functional $J_{\lambda}:H^{1}(\mathbb{R}^N) \to \mathbb{R}$ given by $$
J_\lambda(u)=\frac{1}{2}\int_{\mathbb{R}^N}(|\nabla u|^2-\lambda |u|^2)\,dx-\int_{\mathbb{R}^N}G(u)\,dx, $$ where $G(t)=\int_{0}^{t}g(s)\,ds$. In this case particular attention is devoted to {\it least action solutions}, namely, the solutions minimizing $J_\lambda$ among all non-trivial solutions.
Another important way to find nontrivial solutions of (\ref{g00}) is to search for solutions with {\it prescribed mass}; in this case $\lambda \in \mathbb{R}$ appears as part of the unknown. This approach seems to be particularly meaningful from the physical point of view, because the mass is conserved.
The present paper has been motivated by a seminal paper due to Jeanjean \cite{jeanjean1} that studied the existence of normalized solutions for a large class of Schr\"{o}dinger equations of the type \begin{align}\label{pjeanjean}
\left\{
\begin{aligned}
&-\Delta u=\lambda u+g(u), \quad
\quad
\hbox{in }\mathbb{R}^N,\\
& \int_{\mathbb{R}^{N}}|u|^{2}dx=a^{2},
\end{aligned}
\right. \end{align} with $N \geq 2$, where the function $g:\mathbb{R} \to \mathbb{R}$ is an odd continuous function with subcritical growth that satisfies some technical conditions. One of these conditions is the following: $\exists (\alpha,\beta) \in \mathbb{R} \times \mathbb{R}$ satisfying $$ \left\{ \begin{array}{l} \frac{2N+4}{N}< \alpha \leq \beta < \frac{2N}{N-2}, \quad \mbox{for}\,\, N \geq 3, \\ \mbox{}\\ \frac{2N+4}{N}< \alpha \leq \beta, \quad \mbox{for} \,\, N=1,2, \end{array} \right. $$ such that $$ \alpha G(s) \leq g(s)s \leq \beta G(s) \quad \mbox{with} \quad G(s)=\int_{0}^{s}g(t)\,dt. $$
An example of a function $g$ satisfying the above condition is $g(s)=|s|^{q-2}s$ with $q \in (2+\frac{4}{N},2^*)$ when $N\geq 3$ and $q> 4$ when $N=2$. In order to overcome the loss of compactness of the Sobolev embedding in the whole space $\mathbb{R}^N$, the author worked in the space $H_{rad}^{1}(\mathbb{R}^N)$ to recover some compactness. However, the most important and interesting point, in our opinion, is the fact that Jeanjean did not work directly with the energy functional $I:H^{1}(\mathbb{R}^N) \to \mathbb{R}$ associated with the problem \eqref{pjeanjean} given by
$$
I(u)=\frac{1}{2}\int_{\mathbb{R}^{N}} |\nabla u|^2 dx-\int_{\mathbb{R}^{N}} G(u)dx. $$ In his approach, he considered the functional $\widetilde{I}:H^{1}(\mathbb{R}^N) \times \mathbb{R}\to \mathbb{R}$ given by $$
\widetilde{I}(u,s)=\frac{e^{2s}}{2}\int_{\mathbb{R}^{N}} |\nabla u|^2 dx-\frac{1}{e^{Ns}}\int_{\mathbb{R}^{N}} G(e^{\frac{Ns}{2}}u(x))\,dx. $$ After a careful analysis, it was proved that $I$ and $\widetilde{I}$ satisfy the mountain pass geometry on the manifold $$
S(a)=\{u \in H^{1}(\mathbb{R}^N)\,:\, | u |_2=a\, \}, $$ and their mountain pass levels are equal; we denote this common level by $\gamma(a)$. Moreover, using the properties of $\widetilde{I}(u,s)$, a $(PS)$ sequence $(u_n)$ for $I$ associated with the mountain pass level $\gamma(a)$ was obtained, which is bounded in $H_{rad}^{1}(\mathbb{R}^N)$. Finally, after some estimates, the author was able to prove that the weak limit of $(u_n)$, denoted by $u$, is nontrivial, belongs to $S(a)$ and verifies $$
-\Delta u-g(u)=\lambda_au \quad \mbox{in} \quad \mathbb{R}^N, $$ for some $\lambda_a<0$. As an example of a nonlinearity explored in \cite{jeanjean1}, we cite $$
g(t)=\mu |t|^{q-2}t\quad t \in \mathbb{R}, \leqno{(g_0)} $$ where $\mu>0$ and $q \in (2+\frac{4}{N},2^*)$. We recall that, although the normalized problem is more natural in applications, it brings additional difficulties: the Nehari manifold method cannot be applied, because the constant $\lambda_a$ is unknown in the problem; it is necessary to prove that the weak limit belongs to the constraint manifold; and it becomes harder to apply the usual approaches for obtaining the boundedness of the Palais-Smale sequence.
We recall that the number $\bar{q}:=2+\frac{4}{N}$ is called in the literature the $L^2$-critical exponent, which comes from the Gagliardo-Nirenberg inequality (see \cite[Theorem 1.3.7, page 9]{CazenaveLivro}). If $g$ is of the form $(g_0)$ with $q \in (2,\bar{q})$, we say that the problem is $L^2$-subcritical, while in the case $q \in (\bar{q},2^*)$ the problem is $L^2$-supercritical. Associated with the $L^2$-supercritical problem, we would like to cite \cite{Bartschmolle}, where the authors studied a problem involving a vanishing potential. In the purely $L^2$-critical case, that is, $q =2+\frac{4}{N}$, related problems were studied in \cite{Cheng,Miao}.
In \cite{Nicola1}, Soave studied the normalized solutions for the nonlinear Schr\"{o}dinger equation (\ref{11}) with combined power nonlinearities of the type $$
f(t)=\mu |t|^{q-2}t+|t|^{p-2}t,\quad t \in \mathbb{R}, \leqno{(f_0)} $$ where $$ 2<q\leq 2+\frac{4}{N}\leq p<2^{*},\,\, p\neq q \quad \text{and}\,\, \mu\in \mathbb{R}. $$ He showed that the interplay between subcritical, critical and supercritical nonlinearities strongly affects the geometry of the functional as well as the existence and properties of ground states.
Recently some authors have considered the problem (\ref{11}) with $f$ of the form $(f_0)$ but with $p=2^*$, which implies that $f$ has a critical growth in the Sobolev sense. In \cite{JeanjeanJendrejLeVisciglia}, the existence of a ground state normalized solution is obtained as minimizer of the constrained functional assuming that $q \in (2,2+\frac{4}{N})$. While in \cite{JeanjeanLe} a multiplicity result was established, where the second solution is not a ground state. For the general case $q \in (2,2^*)$, we would like to mention Soave \cite{Nicola2}, where the existence result was obtained by imposing that $\mu a^{(1-\gamma_p)q}<\alpha$, where $\alpha$ is a specific constant that depends on $N$ and $q$ and $\gamma_p=\frac{N(p-2)}{2p}$. We have seen that the results in the paper are deeply dependent on the assumptions about $a$ and $\mu$, because by the Pohozaev identidy the problem (\ref{11}) does not have any solution if $\lambda =- a \geq 0$ and $\mu >0.$ Still related to the case $q \in (2+\frac{4}{N},2^*)$, we would like to refer \cite{AkahoriIbrahimKikuchiNawa1,AkahoriIbrahimKikuchiNawa2} where the existence of least action solutions was proved with $\mu>0$ and $N\geq 4,$ and for $N=3$ by supposing a technical on the constants $\lambda$ and $\mu$.
We recall that elliptic problems involving the critical Sobolev exponent have been studied by many researchers since the pioneering paper by Brezis and Nirenberg \cite{BN} appeared, and much progress has been made in several directions. We would like to mention the excellent book \cite{Willem} for a review on this subject. In our setting, since the problem (\ref{11}) has no solution for any $\mu$ when $\lambda=0$, treating $\lambda$ as a Lagrange multiplier makes it possible to obtain a solution by combining the arguments made in \cite{BN} with the concentration compactness principle. For the reader interested in normalized solutions for Schr\"{o}dinger equations, we would also like to refer to \cite{BartschJeanjeanSaove}, \cite{BartschSaove}, \cite{BellazziniJeanjeanLuo}, \cite{CingolaniJeanjean}, \cite{Guo}, \cite{JeanjeanLu}, \cite{MEDERSKISCHINO}, \cite{BenedettaTavaresVerzini}, \cite{Stefanov}, \cite{TaoVisanZhang}, \cite{WangLi} and references therein.
Our main result for the Sobolev critical case is the following: \begin{theorem}\label{T1}
Assume that $f$ is of the form $(f_0)$ with $p=2^*$ and $q \in (2+\frac{4}{N},2^*)$. Then, there exists $\mu^*=\mu^*(a)>0$ such that the problem \eqref{11} admits a couple $(u_{a}, \lambda_{a})\in H^{1}(\mathbb{R}^N)\times \mathbb{R}$ of weak solutions such that $\int_{\mathbb{R}^{N}}|u_{a}|^{2}dx=a^{2}$ and $\lambda_{a}<0$ for all $\mu \geq \mu^*$. \end{theorem}
The above theorem complements the results found in \cite{Nicola2} for the $L^2$-supercritical case, because in that paper $\mu \in (0,a^{-(1-\gamma_p)q}\alpha)$ for some $\alpha>0$, so $\mu$ cannot be taken large, while in our paper $\mu$ can be arbitrarily large, since $\mu \in [\mu^*(a),+\infty)$. Here we use a different approach from that explored in \cite{Nicola2}: we work directly with the mountain pass geometry and the concentration-compactness principle due to Lions \cite{Lions}, while in \cite{Nicola2} Soave employed a minimization technique and used the properties of the Pohozaev manifold.
Motivated by the research made in the critical Sobolev case, in this paper we also study the exponential critical growth case for $N=2$, which is a novelty for this type of problem. To the best of our knowledge, there is no reference dealing with the normalized problem under exponential critical growth. We recall that in $\mathbb{R}^2$ the natural growth restriction on the function $f$ is given by the inequality of Trudinger and Moser \cite{M,T}. More precisely, we say that a function $f$ has exponential critical growth if there is $\alpha_0 >0$ such that
$$ \lim_{|s| \to \infty} \frac{|f(s)|}{e^{\alpha s^{2}}}=0 \,\,\, \forall\, \alpha > \alpha_{0}\quad \mbox{and} \quad
\lim_{|s| \to \infty} \frac{|f(s)|}{e^{\alpha s^{2}}}=+ \infty \,\,\, \forall\, \alpha < \alpha_{0}. $$ We would like to mention that problems involving exponential critical growth have received special attention in recent years; see, for example, \cite{A, AdoOM, ASS1, Montenegro, Cao,DMR,DdOR, OS,doORuf} for semilinear elliptic equations, and \cite{1,AlvesGio,2,5} for quasilinear equations.
In this case, we assume that $f$ is a continuous function that satisfies the following conditions:
\begin{itemize}
\item[\rm ($f_1$)]$\displaystyle \lim_{t \to 0}\frac{|f(t)|}{|t|^{\tau}}=0$,\, \mbox{for some}\, $\tau>3$;
\item[\rm ($f_2$)]$$
\lim_{|t|\rightarrow +\infty} \frac{|f(t)|}{e^{\alpha t^{2}}}
=
\begin{cases}
0,& \hbox{for } \alpha> 4\pi,\\
+\infty,& \hbox{for } 0<\alpha<4\pi;
\end{cases}
$$
\item[\rm ($f_3$)] there exists a positive constant $\theta>4$ such that
\begin{equation*}
0<\theta F(t)\leq tf(t), \, \, \forall\, t \not= 0,
\,
\hbox{where }
F(t)=\int_{0}^{t}f(s)ds;
\end{equation*}
\item[\rm ($f_4$)] there exist constants $p>4$ and $\mu>0$ such that
\begin{equation*}
sgn(t)f(t)\geq \mu \, |t|^{p-1}\quad \text{for all}\,\, t \not=0,
\end{equation*} where $sgn:\mathbb{R}\setminus \{0\} \to \mathbb{R}$ is given by $$ sgn(t)= \left\{ \begin{array}{l} 1, \quad \mbox{if} \quad t>0 \mbox{}\\ -1, \quad \mbox{if} \quad t<0. \end{array} \right. $$
\end{itemize}
Our main result is as follows: \begin{theorem}\label{T2}
Assume that $f$ satisfies $(f_1)-(f_4)$. If $a \in (0,1)$, then there exists $\mu^*=\mu^*(a)>0$ such that the problem \eqref{11} admits a couple $(u_{a}, \lambda_{a})\in H^{1}(\mathbb{R}^2)\times \mathbb{R}$ of weak solutions with $\int_{\mathbb{R}^{2}}|u_{a}|^{2}dx=a^{2}$ and $\lambda_{a}<0$ for all $\mu \geq \mu^*$. \end{theorem}
In the proofs of Theorem \ref{T1} and Theorem \ref{T2} we borrow the ideas developed in Jeanjean \cite{jeanjean1}. The main difficulty in the proofs of these theorems is associated with the fact that we are working with critical nonlinearities in the whole space $\mathbb{R}^N$. As mentioned above, in the proof of Theorem \ref{T1}, the concentration-compactness principle due to Lions \cite{Lions} is crucial in our arguments, while in the proof of Theorem \ref{T2}, the Trudinger-Moser inequality developed by Cao \cite{Cao} plays an important role in many estimates. Moreover, in the proofs of these theorems we shall work on the space $H^{1}_{rad}(\mathbb{R}^N)$, because it has very nice compact embeddings. Furthermore, by Palais' principle of symmetric criticality, see \cite{palais}, it is well known that solutions in $H^{1}_{rad}(\mathbb{R}^N)$ are in fact solutions in the whole space $H^{1}(\mathbb{R}^N)$.
\noindent \textbf{Notation:} From now on in this paper, otherwise mentioned, we use the following notations: \begin{itemize}
\item $B_r(u)$ is an open ball centered at $u$ with radius $r>0$, $B_r=B_r(0)$.
\item $C,C_1,C_2,...$ denote any positive constant, whose value is not relevant.
\item $|\,\,\,|_p$ denotes the usual norm of the Lebesgue space $L^{p}(\mathbb{R}^N)$, for $p \in [1,+\infty]$,
$\Vert\,\,\,\Vert$ denotes the usual norm of the Sobolev space $H^{1}(\mathbb{R}^N)$.
\item $o_{n}(1)$ denotes a real sequence with $o_{n}(1)\to 0$ as $n \to +\infty$.
\end{itemize}
\section{Normalized solutions: The Sobolev critical case for $N\geq 3$}
In order to follow the same strategy as in \cite{jeanjean1}, we need the following definitions to introduce our variational procedure.
\begin{itemize}
\item[\rm (1)] $S(a)=\{u \in H^{1}(\mathbb{R}^N)\,:\, | u |_2=a\, \}$ is the sphere of radius $a>0$ defined with the norm $|\,\,\,\,|_2$.
\item[\rm (2)] $J: H^{1}(\mathbb{R}^N)\rightarrow \mathbb{R}$ with
$$
J(u)=\frac{1}{2}\int_{\mathbb{R}^{N}} |\nabla u|^2 dx-\int_{\mathbb{R}^{N}} F(u)dx, $$ where $$
F(t)=\frac{\mu}{q}|t|^q+\frac{1}{2^*}|t|^{2^*}, \quad t \in \mathbb{R}. $$ Hereafter, $H=H^{1}(\mathbb{R}^N)\times \mathbb{R}$ is equipped with the scalar product $$ \langle\cdot, \cdot\rangle_{H}=\langle\cdot, \cdot\rangle_{H^{1}(\mathbb{R}^N)}+\langle\cdot, \cdot\rangle_{\mathbb{R}} $$ and corresponding norm $$ \Vert \cdot\Vert_{H}=(\Vert \cdot\Vert_{H^{1}(\mathbb{R}^N)}^{2}+ \vert \cdot\vert_{ \mathbb{R}}^{2})^{1/2}. $$
In this section, $f$ denotes the function $f(t)=\mu|t|^{q-2}t+|t|^{2^*-2}t$ with $t \in\mathbb{R}$, and so, $F(t)=\int_0^tf(s)\,ds$.
\item[\rm (3)]$\mathcal{H}: H\rightarrow H^{1}(\mathbb{R}^N)$ with
\begin{equation*}
\mathcal{H}(u, s)(x)=e^{\frac{Ns}{2}}u(e^{s}x). \end{equation*}
\item[\rm (4)] $\tilde{J}: H\rightarrow \mathbb{R}$ with
$$
\tilde{J}(u, s)=\frac{e^{2s}}{2}\int_{\mathbb{R}^{N}} |\nabla u|^2 dx-\frac{1}{e^{Ns}}\int_{\mathbb{R}^{N}} F(e^{\frac{Ns}{2}}u(x))\,dx $$ or $$
\tilde{J}(u, s)=\frac{1}{2}\int_{\mathbb{R}^{N}} |\nabla v|^2 dx-\int_{\mathbb{R}^{N}} F(v(x))\,dx=J(v)\quad \mbox{for}\ v=\mathcal{H}(u, s)(x). $$ \end{itemize}
Throughout this section, $S$ denotes the following constant \begin{equation} \label{Sp*}
S=
\inf_{ {\footnotesize{
\begin{array}{l}
u \in D^{1,2}(\mathbb{R}^N) \\
u \not=0
\end{array}}}}
\frac{ \int_{\mathbb{R}^N}|\nabla u|^{2}\,dx}{\left(\int_{\mathbb{R}^N}|u|^{2^*}\,dx\right)^{\frac{2}{2^*}}}, \end{equation} where $2^*=\frac{2N}{N-2}$ for $N\geq 3$, and $D^{1,2}(\mathbb{R}^N)$ is the Banach space given by $$
D^{1,2}(\mathbb{R}^N)=\left\{u \in L^{2^*}(\mathbb{R}^N)\;:\;|\nabla u|^{2} \in L^{2}(\mathbb{R}^N)\right\} $$ endowed with the norm $$
\|u\|_{D^{1,2}(\mathbb{R}^N)}=\left(\int_{\mathbb{R}^N}|\nabla u|^{2}\,dx\right)^{\frac{1}{2}}. $$ It is well known that the embedding $D^{1,2}(\mathbb{R}^N) \hookrightarrow L^{2^*}(\mathbb{R}^N)$ is continuous.
\subsection{The minimax approach}
We shall prove that $\tilde{J}$ on $S(a)\times \mathbb{R}$ possesses a kind of mountain-pass geometrical structure.
\begin{lemma}\label{vicentej} Let $u\in S(a)$ be arbitrary but fixed. Then we have:\\ \noindent (i) $\vert \nabla \mathcal{H}(u, s) \vert_{2}\rightarrow 0$ and $J(\mathcal{H}(u, s))\rightarrow 0$ as $s\rightarrow -\infty$;\\ \noindent (ii) $\vert \nabla \mathcal{H}(u, s) \vert_{2}\rightarrow +\infty$ and $J(\mathcal{H}(u, s))\rightarrow -\infty$ as $s\rightarrow +\infty$. \end{lemma}
\begin{proof} \mbox{} By a straightforward calculation, it follows that
\begin{equation} \label{CONV0}\int_{\mathbb{R}^N}\vert \mathcal{H}(u, s)(x)\vert^{2}\,dx=a^{2}, \quad \int_{\mathbb{R}^N}|\mathcal{H}(u, s)(x)|^{\xi}\,dx= e^{\frac{(\xi-2)Ns}{2}}\int_{\mathbb{R}^N}|u(x)|^{\xi}\,dx, \quad \forall \xi \geq 2, \end{equation} and \begin{equation} \label{CONVJ1*}
\int_{\mathbb{R}^N}\vert \nabla \mathcal{H}(u, s)(x)\vert^{2}\,dx=e^{2s}\int_{\mathbb{R}^N}|\nabla u|^2 dx. \end{equation} From the above equalities, fixing $\xi>2$, we have \begin{equation} \label{CONV1}
\vert \nabla \mathcal{H}(u, s) \vert_{2}\rightarrow 0 \quad \mbox{and} \quad |\mathcal{H}(u,s)|_{\xi} \to 0 \quad \mbox{as} \quad s \to -\infty. \end{equation} Hence, $$
\int_{\mathbb{R}^N}|F( \mathcal{H}(u, s))|\,dx \leq C_1\int_{\mathbb{R}^N}| \mathcal{H}(u, s)|^{q}\,dx+C_2\int_{\mathbb{R}^N}| \mathcal{H}(u, s)|^{2^*}\,dx \to 0 \quad \mbox{as} \quad s \to -\infty, $$ from where it follows that $$ J(\mathcal{H}(u, s))\rightarrow 0 \quad \mbox{as} \quad s\rightarrow -\infty, $$ showing $(i)$.
In order to show $(ii)$, note that by (\ref{CONVJ1*}), $$
|\nabla \mathcal{H}(u, s) |_{2}\rightarrow +\infty \quad \mbox{as} \quad s \to +\infty. $$ On the other hand, $$
J(\mathcal{H}(u, s)) \leq \frac{1}{2}|\nabla \mathcal{H}(u,s)|_2^2-\frac{\mu}{q}\int_{\mathbb{R}^N}|\mathcal{H}(u,s)|^{q}\,dx=\frac{e^{2s}}{2}\int_{\mathbb{R}^N}|\nabla u|^2 dx-\frac{\mu e^{\frac{(q-2)Ns}{2}}}{q}\int_{\mathbb{R}^N}|u(x)|^{q}\,dx. $$ Since $q>2+\frac{4}{N}$, the last inequality yields $$ J(\mathcal{H}(u, s))\rightarrow -\infty \quad \mbox{as} \quad s\rightarrow +\infty. $$ \end{proof}
\begin{lemma} \label{PJ1} There exists $K(a)>0$ small enough such that
$$
0<\sup_{u\in A} J(u)<\inf_{u\in B} J(u)
$$ with $$
A=\left\{u\in S(a), \int_{\mathbb{R}^N} |\nabla u|^2 dx\leq K(a) \right\}\quad \mbox{and} \quad B=\left\{u\in S(a), \int_{\mathbb{R}^N} |\nabla u|^2 dx=2K(a) \right\}. $$ \end{lemma} \begin{proof} We will need the following Gagliardo-Nirenberg inequality: for any $\xi \in (2, 2^{*}]$, $$
|u|_\xi\leq C(\xi, N)|\nabla u|_2^{\gamma}|u|_2^{1-\gamma}, $$
where $\gamma=N(\frac{1}{2}-\frac{1}{\xi})$. If we fix $|\nabla u|^{2}_{2}\leq K(a)$ and $|\nabla v|^{2}_2=2K(a)$, we derive that $$
\int_{\mathbb{R}^N}F(u)\,dx \leq C_1|u|^{q}_{q}+C_2|u|^{2^*}_{2^*}. $$ Then, by the Gagliardo-Nirenberg inequality, $$
\int_{\mathbb{R}^N}F(v)\,dx \leq C_1(|\nabla v|_2^2)^{N(\frac{q-2}{4})}+C_2(|\nabla v|_2^2)^{N(\frac{2^*-2}{4})}. $$ Since $F(u)\geq 0$ for any $u\in H^{1}(\mathbb{R}^{N})$, we have \begin{eqnarray*}
J(v)-J(u) &=&\frac{1}{2}\int_{\mathbb{R}^N}|\nabla v|^{2}\,dx-\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^{2}\,dx-\int_{\mathbb{R}^N}F(v)\,dx+\int_{\mathbb{R}^N}F(u)\,dx\\
&\geq &\frac{1}{2}\int_{\mathbb{R}^N}|\nabla v|^{2}\,dx-\frac{1}{2}\int_{\mathbb{R}^N}|\nabla u|^{2}\,dx-\int_{\mathbb{R}^N}F(v)\,dx, \end{eqnarray*} and so, $$ J(v)-J(u) \geq \frac{1}{2}K(a)-C_3K(a)^{N(\frac{q-2}{4})}-C_4 K(a)^{N(\frac{2^*-2}{4})}. $$ Thereby, fixing $K(a)$ small enough, in such a way that $$ \frac{1}{2}K(a)-C_3K(a)^{N(\frac{q-2}{4})}-C_4 K(a)^{N(\frac{2^*-2}{4})}>0, $$ we get the desired result. \end{proof}
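Let us briefly indicate, for the reader's convenience, where the exponents in the proof above come from and why a suitable $K(a)$ exists; this is a standard computation and introduces no new notation beyond the constant $C(q,N)$ from the Gagliardo-Nirenberg inequality. For $v \in S(a)$, taking $\xi=q$ above gives
$$
|v|_{q}^{q}\leq C(q, N)^{q}\, a^{\,q-\frac{N(q-2)}{2}}\left(|\nabla v|_{2}^{2}\right)^{N(\frac{q-2}{4})},
$$
since $q\gamma=\frac{N(q-2)}{2}$, and taking $\xi=2^*$ gives the exponent $N(\frac{2^*-2}{4})=\frac{N}{N-2}$. Since $q>2+\frac{4}{N}$ implies $N(\frac{q-2}{4})>1$, and also $\frac{N}{N-2}>1$, both power terms are of higher order than $\frac{1}{2}K(a)$ as $K(a)\rightarrow 0$, so a suitable small $K(a)$ indeed exists.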
A byproduct of the last lemma is the following corollary.
\begin{corollary} There exists $K(a)>0$ such that if $u \in S(a)$ and $|\nabla u|^2_{2}\leq K(a)$, then $J(u) >0$. \end{corollary} \begin{proof} Arguing as in the last lemma, $$
J(u) \geq \frac{1}{2}|\nabla u|_{2}^{2}-C_1|\nabla u|_2^{N(\frac{q-2}{2})}-C_2|\nabla u|_2^{N(\frac{2^*-2}{2})}>0, $$ for $K(a)$ small enough. \end{proof}
In what follows, we fix $u_0 \in S(a)$ and apply Lemma \ref{vicentej} to get two numbers $s_1<0$ and $s_2>0$, in such a way that the functions $u_1=\mathcal{H}(u_0,s_1)$ and $u_2=\mathcal{H}(u_0,s_2)$ satisfy $$
|\nabla u_1|^2_2<\frac{K(a)}{2}, \,\, |\nabla u_2|_2^2>2K(a),\,\, J(u_1)>0\quad \mbox{and} \quad J(u_2)<0. $$
Now, following the ideas from Jeanjean \cite{jeanjean1}, we fix the following mountain pass level given by $$ \gamma_\mu(a)=\inf_{h \in \Gamma}\max_{t \in [0,1]}J(h(t)) $$ where $$ \Gamma=\left\{h \in C([0,1],S(a)): h(0)=u_1 \,\,\mbox{and} \,\, h(1)=u_2 \right\}. $$ From Lemma \ref{PJ1}, $$ \max_{t \in [0,1]}J(h(t))>\max \left\{J(u_1),J(u_2)\right\}. $$
\begin{lemma} \label{ESTMOUNTPASS} There holds $\displaystyle \lim_{\mu \to +\infty}\gamma_\mu(a)=0$.
\end{lemma} \begin{proof} In what follows we set the path $h_0(t)=\mathcal{H}(u_0,(1-t)s_1+ts_2) \in \Gamma$. Then, $$
\gamma_\mu(a) \leq \max_{t \in [0,1]}J(h_0(t)) \leq \max_{r \geq 0}\left\{\frac{r^2}{2}|\nabla u_0|_{2}^{2}-\frac{\mu}{q}r^{\frac{N(q-2)}{2}}|u_0|_{q}^{q}\right\}, $$ and so, for some positive constant $C_2$, $$ \gamma_\mu(a) \leq C_2\left(\frac{1}{\mu}\right)^{\frac{4}{N(q-2)-4}} \to 0 \quad \mbox{as} \quad \mu \to +\infty. $$ Here, we have used the fact that $q>2+\frac{4}{N}$.
\end{proof}
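The last estimate in the proof above follows from an elementary maximization, which we record here only for the reader's convenience (the quantities $A$, $B$, $\theta$ and $r_*$ below are auxiliary and are not used elsewhere). Writing $A=|\nabla u_0|_{2}^{2}$, $B=|u_0|_{q}^{q}$ and $\theta=\frac{N(q-2)}{2}>2$, the function $r\mapsto \frac{A}{2}r^{2}-\frac{\mu B}{q}r^{\theta}$ attains its maximum on $[0,+\infty)$ at $r_{*}=\big(\frac{qA}{\mu\theta B}\big)^{\frac{1}{\theta-2}}$, and the maximum value equals
$$
A\Big(\frac{1}{2}-\frac{1}{\theta}\Big)r_{*}^{2}=A\Big(\frac{1}{2}-\frac{1}{\theta}\Big)\Big(\frac{qA}{\mu\theta B}\Big)^{\frac{2}{\theta-2}}=C_2\Big(\frac{1}{\mu}\Big)^{\frac{4}{N(q-2)-4}},
$$
with $C_2=C_2(A,B,N,q)>0$, which yields the stated bound on $\gamma_\mu(a)$.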
In what follows $(u_n)$ denotes the $(PS)$ sequence associated with the level $\gamma_\mu(a)$, which is obtained by setting $u_n=\mathcal{H}(v_n,s_n)$, where $(v_n,s_n)$ is the $(PS)$ sequence for $\tilde{J}$ obtained by \cite[Proposition 2.2]{jeanjean1}, associated with the level $\gamma_\mu(a)$. More precisely, we have
\|J|'_{S(a)}(u_n)\| \to 0 \quad \mbox{as} \quad n \to +\infty. \end{equation*} Setting the functional $\Psi:H^{1}(\mathbb{R}^N) \to \mathbb{R}$ given by $$
\Psi(u)=\frac{1}{2}\int_{\mathbb{R}^N}|u|^2\,dx, $$ it follows that $S(a)=\Psi^{-1}(\{a^2/2\})$. Then, by Willem \cite[Proposition 5.12]{Willem}, there exists $(\lambda_n) \subset \mathbb{R}$ such that $$
||J'(u_n)-\lambda_n\Psi'(u_n)||_{H^{-1}} \to 0 \quad \mbox{as} \quad n \to +\infty. $$ Hence, \begin{equation} \label{EQ10}
-\Delta u_n-f(u_n)=\lambda_nu_n\ + o_n(1) \quad \mbox{in} \quad (H^{1}(\mathbb{R}^N))^*. \end{equation} Moreover, another important limit involving the sequence $(u_n)$ is \begin{equation} \label{EQ1**}
Q(u_n)=\int_{\mathbb{R}^N}|\nabla u_n|^2 dx +N \int_{\mathbb{R}^N} F(u_n) dx- \frac{N}{2}\int_{\mathbb{R}^N} f(u_n) u_n dx\to 0 \quad \mbox{as} \quad n \to +\infty, \end{equation} which is obtained using the limit below $$ {\partial_s}\tilde{J}(v_n,s_n) \to 0 \quad \mbox{as} \quad n \to +\infty, $$ that was also proved in \cite{jeanjean1}.
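For the reader's convenience, we sketch how the functional $Q$ arises; this is a direct computation. For $(u,s)\in H$,
$$
\partial_s\tilde{J}(u,s)=e^{2s}\int_{\mathbb{R}^{N}} |\nabla u|^2 dx+\frac{N}{e^{Ns}}\int_{\mathbb{R}^{N}} F(e^{\frac{Ns}{2}}u)\,dx-\frac{N}{2}\,\frac{1}{e^{Ns}}\int_{\mathbb{R}^{N}} f(e^{\frac{Ns}{2}}u)\,e^{\frac{Ns}{2}}u\,dx,
$$
which, after the change of variables $y=e^{s}x$, equals $Q(v)$ with $v=\mathcal{H}(u,s)$; hence the limit ${\partial_s}\tilde{J}(v_n,s_n)\to 0$ translates into $Q(u_n)\to 0$.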
Arguing as in \cite[Lemmas 2.3 and 2.4]{jeanjean1}, we know that $(u_n)$ is a bounded sequence, and so, the number $\lambda_n$ must satisfy the equality below $$
\lambda_n=\frac{1}{|u_n|^{2}_{2}}\left\{ |\nabla u_n|^{2}_{2} -\int_{\mathbb{R}^N} f(u_n) u_n dx \right\}+o_n(1), $$ or equivalently, \begin{equation} \label{lambdan}
\lambda_n=\frac{1}{a^{2}}\left\{ |\nabla u_n|^{2}_{2} -\int_{\mathbb{R}^N} f(u_n) u_n dx \right\}+o_n(1). \end{equation}
\begin{lemma} \label{Limitacao}There exists $C>0$ such that $$ \limsup_{n \to +\infty}\int_{\mathbb{R}^N}F(u_n)\,dx \leq C \gamma_\mu(a) $$ and $$ \limsup_{n \to +\infty}\int_{\mathbb{R}^N}f(u_n)u_n\,dx \leq C\gamma_\mu(a). $$ \end{lemma} \begin{proof} From (\ref{gamma(a)}) and (\ref{EQ1**}) $$ N{J}(u_n)+Q(u_n)=N\gamma_\mu(a)+o_n(1), $$ then $$
\frac{N+2}{2}\int_{\mathbb{R}^N}|\nabla u_n|^2 dx - \frac{N}{2}\int_{\mathbb{R}^N} f(u_n) u_n dx=N\gamma_\mu(a)+o_n(1). $$ Using again (\ref{gamma(a)}), we get $$ \frac{N+2}{2}\left(2\int_{\mathbb{R}^N} F(u_n) dx+ 2\gamma_\mu(a)+o_n(1)\right) - \frac{N}{2}\int_{\mathbb{R}^N} f(u_n) u_n dx = N\gamma_\mu(a)+o_n(1), $$ that is, \begin{equation}\label{Ee} -(N+2)\int_{\mathbb{R}^N} F(u_n) dx+ \frac{N}{2}\int_{\mathbb{R}^N} f(u_n) u_n dx = 2\gamma_\mu(a)+o_n(1). \end{equation} Since $ q \in (2+\frac{4}{N}, 2^{*})$ and $
F(t)=\frac{\mu}{q}|t|^q+\frac{1}{2^*}|t|^{2^*}, \,\, \forall \, t \in \mathbb{R}, $ we obtain \begin{equation} \label{q} qF(t) \leq f(t)t, \,\, t \in\mathbb{R}. \end{equation} This together with (\ref{Ee}) yields $$ \left(\frac{qN}{2} -(N+2)\right)\int_{\mathbb{R}^N} F(u_n) dx\leq 2\gamma_\mu(a)+o_n(1), $$ and so, $$ \limsup_{n \to +\infty}\int_{\mathbb{R}^N}F(u_n)\,dx \leq C \gamma_\mu(a). $$ This inequality combined again with (\ref{Ee}) ensures that $$ \limsup_{n \to +\infty}\int_{\mathbb{R}^N}f(u_n)u_n\,dx \leq C \gamma_\mu(a). $$ \end{proof}
\begin{lemma} \label{goodest0} $\displaystyle \limsup_{n \to +\infty}|\nabla u_n|_{2}^{2} \leq C \gamma_\mu(a)$.
\end{lemma} \begin{proof} First of all, let us recall that
$$
\int_{\mathbb{R}^N}|\nabla u_n|^2\,dx=2\gamma_\mu(a)+2\int_{\mathbb{R}^N}F(u_n)\,dx+o_n(1).
$$
Then, from Lemma \ref{Limitacao},
$$
\limsup_{n \to +\infty}|\nabla u_n|_{2}^{2} \leq (2+2C)\gamma_\mu(a),
$$
where $C>0$ is the constant given by Lemma \ref{Limitacao}. \end{proof}
Now, from (\ref{Ee}), the sequence $(\int_{\mathbb{R}^N}F(u_n)\,dx)$ is bounded away from zero. Otherwise, up to a subsequence, we would have $$ \int_{\mathbb{R}^N}F(u_n)\,dx \to 0 \quad \mbox{as} \quad n \to +\infty, $$ which leads to $$ \int_{\mathbb{R}^N}f(u_n)u_n\,dx \to 0 \quad \mbox{as} \quad n \to +\infty, $$ since $f(t)t \leq 2^*F(t)$. These limits combined with (\ref{Ee}) imply that $\gamma_\mu(a)=0$, which is absurd. From this, in what follows we can assume, up to a subsequence, that \begin{equation} \label{Fc1} \int_{\mathbb{R}^N}F(u_n)\,dx\rightarrow C_1>0,\ \mbox{as}\ n \rightarrow \infty. \end{equation}
\begin{lemma} \label{mu0} The sequence $(\lambda_n)$ is bounded with $$
\lambda_n = -\frac{\mu}{a^2}\Big(\frac{N}{q}-\frac{N-2}{2}\Big)\int_{\mathbb{R}^N}|u_n|^q \,dx+o_n(1) $$ and $$
\limsup_{n \to +\infty}|\lambda_n| \leq \frac{C}{a^2}\gamma_\mu(a), $$ for some $C>0$. \end{lemma} \begin{proof} The boundedness of $(u_n)$ yields that $(\lambda_n)$ is bounded, because
\begin{equation}\label{lambda}
\lambda_n a^2=\lambda_n |u_n|^{2}_{2}=|\nabla u_n|_{2}^{2}-\int_{\mathbb{R}^N}f(u_n)u_n\,dx +o_n(1),
\end{equation}
and so,
\begin{eqnarray*}
|\lambda_n| &\leq&\frac{1}{a^2}\left( |\nabla u_n|_{2}^{2}+\int_{\mathbb{R}^N}f(u_n)u_n\,dx\right) + o_n(1) \\
&\leq&\frac{C}{a^2}\gamma_\mu(a) + o_n(1).
\end{eqnarray*} This guarantees the boundedness of $(\lambda_n)$ and the second inequality is proved.
In order to prove the first identity, we know by (\ref{EQ1**}) that
$$
|\nabla u_n|_{2}^{2}=\frac{N}{2}\int_{\mathbb{R}^N}f(u_n)u_n\,dx - N\int_{\mathbb{R}^N}F(u_n)\,dx + o_n(1).
$$
Inserting this equality in (\ref{lambda}), we obtain
$$
\lambda_n a^2=\frac{N-2}{2}\int_{\mathbb{R}^N}f(u_n)u_n\,dx - N\int_{\mathbb{R}^N}F(u_n)\,dx+o_n(1).
$$
Since $f(t)t=\mu|t|^{q}+|t|^{2^*}$, $F(t)=\frac{\mu}{q}|t|^{q}+\frac{1}{2^*}|t|^{2^*}$ and $\frac{N-2}{2}=\frac{N}{2^*}$, the critical terms cancel, and we arrive at
$$\lambda_n a^2= -\mu\Big(\frac{N}{q}-\frac{N-2}{2}\Big)\int_{\mathbb{R}^N}|u_n|^q \,dx +o_n(1),$$ showing the first identity. \end{proof}
In the sequel, we restrict our study to the space $H_{rad}^{1}(\mathbb{R}^N)$. Then, it is well known that \begin{equation} \label{convq}
\lim_{n \to +\infty}\int_{\mathbb{R}^N}|u_n|^q\,dx=\int_{\mathbb{R}^N}|u|^q\,dx, \end{equation} where $u_n \rightharpoonup u$ in $H_{rad}^{1}(\mathbb{R}^N)$, because $q \in (2+\frac{4}{N},2^*)$. \begin{lemma} There exists $\mu^*>0$ such that $ u\not=0$ for all $\mu \geq \mu^*$. \end{lemma} \begin{proof} Arguing by contradiction, let us assume that $u=0$. Then, \begin{equation} \label{EQq}
\lim_{n \to +\infty}\int_{\mathbb{R}^N}|u_n|^q\,dx=0, \end{equation} and by the first identity in Lemma \ref{mu0}, \begin{equation} \label{ln0} \lim_{n \to +\infty} \lambda_n = 0. \end{equation} The equality \begin{equation*}
a^2\lambda_n=|\nabla u_n|^{2}_{2}-\int_{\mathbb{R}^N}f(u_n)u_n\,dx+o_n(1) \end{equation*} together with (\ref{EQq}) and (\ref{ln0}) leads to \begin{equation} \label{mu2}
|\nabla u_n|_{2}^{2}-|u_n|_{2^*}^{2^*}=o_n(1). \end{equation} In what follows, going to a subsequence, we assume that $$
|\nabla u_n|_{2}^{2}=L+o_n(1) \quad \mbox{and} \quad |u_n|_{2^*}^{2^*}=L+o_n(1). $$ We claim that $L>0$. Otherwise, if $L=0$, we must have $$
|\nabla u_n|_{2}^{2}=o_n(1),
$$
and then, since $F \geq 0$,
$$
\gamma_\mu(a)=\lim_{n \to +\infty}J(u_n) \leq \limsup_{n \to +\infty}\frac{1}{2}|\nabla u_n|_{2}^{2}=0,
$$
which is absurd, because $\gamma_\mu(a)>0$.
Since $L>0$, by definition of $S$ in \eqref{Sp*}, $$
S \leq \frac{ \int_{\mathbb{R}^N}|\nabla u_n|^{2}\,dx}{\left(\int_{\mathbb{R}^N}|u_n|^{2^*}\,dx\right)^{\frac{2}{2^*}}}. $$ Taking the $limsup$ as $n \to +\infty$, we obtain $$ S \leq \frac{ L}{L^{\frac{2}{2^*}}}, $$ that is, $$ L \geq S^{\frac{N}{2}}. $$ On the other hand $$
o_n(1)+\gamma_\mu(a)-\frac{a^2\lambda_n}{2}=\frac{1}{2}(|\nabla u_n|_{2}^{2}-a^2\lambda_n )-\frac{\mu}{q}|u_n|^{q}_{q}-\frac{1}{2^*}|u_n|^{2^*}_{2^*}=\frac{1}{N}L+o_n(1). $$
Recalling that $\displaystyle \limsup_{n \to +\infty}|\lambda_n| \leq \frac{C}{a^2}\gamma_\mu(a)$, it follows that $$ \frac{1}{N}S^{\frac{N}{2}}\leq C\gamma_\mu(a), $$ after relabelling the constant $C>0$. Now, since $\gamma_\mu(a) \to 0$ as $\mu \to +\infty$, we may fix $\mu^*$ large enough in such a way that $$ C\gamma_\mu(a) < \frac{1}{N}S^{\frac{N}{2}}, \quad \forall \mu \geq \mu^*, $$ which yields a contradiction. This proves that $u\not=0$ for $\mu>0$ large enough.
\end{proof}
\begin{lemma} \label{crescimento} Increasing if necessary $\mu^*$, we have $u_n \to u$ in $L^{2^*}(\mathbb{R}^N)$ for all $\mu \geq \mu^*$.
\end{lemma} \begin{proof} Using the concentration-compactness principle due to Lions \cite{Lions}, we can find an at most countable index set $\mathcal{J}$ and sequences $(x_{i})\subset \mathbb{R}^{N}$, $(\kappa_{i}), (\nu_{i})\subset (0, \infty)$ such that\\
\noindent $(i)$ \,\, $|\nabla u_n|^{2} \to \kappa$ weakly-$^*$ in the sense of measure \\ \noindent and \\
\noindent $(ii)$ \,\, $|u_n|^{2^*} \to \nu$ weakly-$^*$ in the sense of measure, \\ and $$ \left\{ \begin{array}{l}
(a)\quad \nu=|u|^{2^*}+\sum_{j \in J}\nu_j \delta_{x_j},\\
(b)\quad \kappa \geq |\nabla u|^{2}+\sum_{j \in J}\kappa_j \delta_{x_j},\\
(c)\quad S \nu_j^{\frac{2}{2^*}} \leq \kappa_j,\,\, \forall j\in \mathcal{J},\\ \end{array} \right. $$ where $\delta_{x_{j}}$ is the Dirac mass at the point $x_{j}$. Since $$ -\Delta u_n-f(u_n)=\lambda_n u_n +o_n(1)\quad \mbox{in} \quad (H^{1}(\mathbb{R}^N))^*, $$ we derive that $$
\int_{\mathbb{R}^N}\nabla u_n \nabla \phi \,dx-\lambda_n\int_{\mathbb{R}^N}u_n \phi \,dx=\mu \int_{\mathbb{R}^N}|u_n|^{q-2}u_n\phi\,dx+ \int_{\mathbb{R}^N}|u_n|^{2^*-2}u_n\phi\,dx, \quad \forall \phi \in H^{1}(\mathbb{R}^N). $$ Now, arguing as in \cite[Lemma 2.3]{GP}, $\mathcal{J}$ is empty or otherwise $\mathcal{J}$ is nonempty but finite. In the case that $\mathcal{J}$ is nonempty but finite, we must have $$ \kappa_j \geq {S^{\frac{N}{2}}}, \quad \forall j \in \mathcal{J}. $$ However, by Lemma \ref{goodest0}, $$
\limsup_{n \to +\infty}|\nabla u_n|_{2}^{2} \leq C \gamma_\mu(a). $$ Then, if $\mu^*>0$ is fixed of such way that $$ C \gamma_\mu(a) < \frac{1}{2}{S^{\frac{N}{2}}}, $$ we get a contradiction, and so, $\mathcal{J}= \emptyset$. From this, \begin{equation} \label{localconvergence} u_n \to u \quad \mbox{in} \quad L^{2^*}_{loc}(\mathbb{R}^N). \end{equation} \begin{claim} \label{convBR} For each $R>0$, we have $$ u_n \to u \quad \mbox{in} \quad L^{2^*}(\mathbb{R}^N \setminus B_R(0)). $$ \end{claim} \noindent Indeed, as $u_n \in H_{rad}^{1}(\mathbb{R}^N)$, we know that $$
|u_n(x)| \leq \frac{C_N\|u_n\|}{|x|^{\frac{N-1}{2}}}, \quad \mbox{a.e. in} \quad \mathbb{R}^N, $$ for some constant $C_N>0$ depending only on $N$. Since $(u_n)$ is a bounded sequence in $H^{1}(\mathbb{R}^N)$, we obtain $$
|u_n(x)| \leq \frac{C}{|x|^{\frac{N-1}{2}}}, \quad \mbox{a.e. in} \quad \mathbb{R}^N, $$ and so, $$
|u_n(x)|^{2^*} \leq \frac{C_1}{|x|^{\frac{N(N-1)}{N-2}}}, \quad \mbox{a.e. in} \quad \mathbb{R}^N. $$
Recalling that $\frac{C_1}{|\,\cdot\,|^{\frac{N(N-1)}{N-2}}} \in L^{1}(\mathbb{R}^N \setminus B_R(0))$ and $u_n(x) \to u(x)$ a.e. in $\mathbb{R}^N \setminus B_R(0)$, Lebesgue's dominated convergence theorem gives $$ u_n \to u \quad \mbox{in} \quad L^{2^*}(\mathbb{R}^N \setminus B_R(0)), $$ showing Claim \ref{convBR}. Now, Claim \ref{convBR} combined with (\ref{localconvergence}) ensures that $$ u_n \to u \quad \mbox{in} \quad L^{2^*}(\mathbb{R}^N). $$
\end{proof}
\section{Proof of Theorem \ref{T1}} From the previous analysis the weak limit $u$ of $(u_n)$ is nontrivial. Therefore, by Lemma \ref{mu0} $$
\lim_{n \to +\infty} \lambda_n = -\frac{\mu }{a^2}\Big(\frac{N}{q}-\frac{N-2}{2}\Big)\lim_{n \to +\infty}\int_{\mathbb{R}^N}|u_n|^q \,dx=-\frac{\mu }{a^2}\Big(\frac{N}{q}-\frac{N-2}{2}\Big)\int_{\mathbb{R}^N}|u|^q \,dx<0. $$ So, we can assume without loss of generality that \begin{equation}\label{new} \lambda_n \to \lambda_a<0 \quad \mbox{as} \quad n \to +\infty. \end{equation} Now, the equality (\ref{EQ10}) and \eqref{new} imply that \begin{equation} \label{EQ2}
-\Delta u-f(u)=\lambda_au, \quad \mbox{in} \quad \mathbb{R}^N. \end{equation} Thus, $$
|\nabla u|_{2}^{2}-\lambda_a|u|_{2}^{2}=\int_{\mathbb{R}^N}f(u)u\,dx. $$ On the other hand, $$
|\nabla u_n|_{2}^{2}-\lambda_n|u_n|_{2}^{2}=\int_{\mathbb{R}^N}f(u_n)u_n\,dx+o_n(1), $$ or equivalently, $$
|\nabla u_n|_{2}^{2}-\lambda_a|u_n|_{2}^{2}=\int_{\mathbb{R}^N}f(u_n)u_n\,dx+o_n(1). $$ Recalling that by Lemma \ref{crescimento} $$ u_n \to u \quad \mbox{in} \quad L^{2^*}(\mathbb{R}^N), $$ and using the limit below, which follows from the compact embedding $H_{rad}^{1}(\mathbb{R}^N) \hookrightarrow L^{q}(\mathbb{R}^N)$, $$ u_n \to u \quad \mbox{in} \quad L^{q}(\mathbb{R}^N), $$ we obtain $$ \lim_{n \to +\infty}\int_{\mathbb{R}^N}f(u_n)u_n\,dx=\int_{\mathbb{R}^N}f(u)u\,dx, $$ and consequently $$
\lim_{n \to +\infty}(|\nabla u_n|_{2}^{2}-\lambda_a|u_n|_{2}^{2})=|\nabla u|_{2}^{2}-\lambda_a|u|_{2}^{2}. $$ Since $\lambda_a<0$, the last equality implies that $$ u_n \to u \quad \mbox{in} \quad H^{1}(\mathbb{R}^N), $$
implying that $|u|_{2}^{2}=a^2$. This establishes the desired result.
\section{Normalized solutions: The exponential critical growth case for $N=2$}
In this section we shall deal with the case $N=2$, where $f$ has exponential critical growth and $a \in (0,1)$. We start our study by recalling that, by $(f_1)$ and $(f_2)$, for fixed $q >2$ and any $\zeta>0$ and $\alpha>4\pi$, there exists a constant $C>0$, which depends on $q$, $\alpha$, $\zeta$, such that \begin{equation}
\label{1.2}
|f(t)|\leq\zeta |t|^{\tau}+C|t|^{q-1}(e^{\alpha t^{2}}-1) \text{ for all } t \in \mathbb{R} \end{equation} and, using $(f_3)$, we have \begin{equation}
\label{1.3}
|F(t)|\leq\zeta |t|^{\tau+1}+C|t|^{q}(e^{\alpha t^{2}}-1) \text{ for all } t \in \mathbb{R}. \end{equation} Moreover, it is easy to see that, by \eqref{1.2}, \begin{equation}
\label{1.4}
|f(t)t| \leq\zeta |t|^{\tau+1}+C\vert t\vert^{q}(e^{\alpha t^{2}}-1) \text{ for all } t\in\mathbb{R}. \end{equation}
Finally, let us recall the following version of Trudinger-Moser inequality as stated e.g. in \cite{Cao}. \begin{lemma}\label{Cao}
If $\alpha>0$ and $u\in H^{1}(\mathbb{R}^{2})$, then
\begin{equation*}
\int_{\mathbb{R}^{2}}(e^{\alpha u^{2}}-1)dx<+\infty.
\end{equation*}
Moreover, if $|\nabla u|_{2}^{2}\leq 1$, $| u|_{2}\leq M<+\infty$, and $0<\alpha< 4\pi$, then there exists a positive constant $C(M, \alpha)$, which depends only on $M$ and $\alpha$, such that
\begin{equation*}
\int_{\mathbb{R}^{2}}(e^{\alpha u^{2}}-1)dx\leq C(M, \alpha).
\end{equation*} \end{lemma}
\begin{lemma} \label{alphat11} Let $(u_{n})$ be a sequence in $H^{1}(\mathbb{R}^{2})$ with $u_n \in S(a)$ and
$$
\limsup_{n \to +\infty} |\nabla u_n |_{2}^{2} < 1-a^{2}.
$$
Then, there exist $t> 1$, $t$ close to 1, and $C > 0$ satisfying
\[
\int_{\mathbb{R}^{2}}\left(e^{4 \pi |u_n|^{2}} - 1 \right)^t dx \leq C, \,\,\,\,\forall\, n \in \mathbb{N}.
\]
\end{lemma} \begin{proof}
As
\[
\limsup_{n\rightarrow\infty} |\nabla u_n|_2^2 <1-a^{2} \quad \mbox{and} \quad |u_n|_2^{2}=a^2 <1,
\]
there exist $m >0$ and $n_0\in \mathbb{N}$ verifying
\[
\|u_n\|^{2} < m < 1,
\,\,\,\text{for any}\,\,\, n \geq n_0.
\]
Fix $t > 1$, with $t$ close to $1$ such that $t m < 1$ and
$$
\int_{\mathbb{R}^{2}}\left(e^{4 \pi|u_n|^{2}} -1 \right)^t dx
\leq \int_{\mathbb{R}^{2}}\left(e^{4 t m \pi(\frac{|u_n|}{||u_n||})^{2}} - 1 \right) dx,\,\,\text{for any}\,\, n \geq n_0,
$$
where we have used the inequality \begin{equation*}
\label{ineqe}
(e^{s}-1)^{t}\leq e^{ts}-1, \text{ for }t>1 \text{ and } s\geq 0.
\end{equation*}
Hence, by Lemma \ref{Cao}, there exists $C_{1}=C_{1}(t, m, a)>0$ such that
$$
\int_{\mathbb{R}^{2}}\left(e^{4 \pi |u_n|^{2}} -1 \right)^t dx \leq C_{1} \, \,\,\,\,\forall\, n \geq n_0.
$$
Now, the lemma follows fixing
$$
C=\max\left\{C_1, \int_{\mathbb{R}^{2}}\left(e^{4 \pi |u_{1}|^{2}} -1 \right)^t dx,\dots,\int_{\mathbb{R}^{2}}\left(e^{4 \pi |u_{n_0}|^{2}} -1 \right)^t dx \right\}.
$$ \end{proof}
\begin{corollary} \label{Convergencia em limitados} Let $(u_{n})$ be a sequence in $H^{1}(\mathbb{R}^{2})$ with $u_n \in S(a)$ and
$$
\limsup_{n \to +\infty} |\nabla u_n |_2^{2} < 1-a^2.
$$
If $u_n \rightharpoonup u$ in $H^{1}(\mathbb{R}^{2})$ and $u_n(x) \to u(x)$ a.e in $\mathbb{R}^{2}$, then
$$
F(u_n) \to F(u) \,\, \mbox{in} \,\, L^{1}(B_R(0)), \,\,\text{for any} \,\, R>0.
$$ \end{corollary} \begin{proof} By \eqref{1.3}, fixed $q >2$, for any $\zeta>0$ and $\alpha>4\pi$, there exists a constant $C>0$, which depends on $q$, $\alpha$, $\zeta$, such that \begin{equation*}
|F(t)|\leq\zeta |t|^{\tau+1}+C|t|^{q}(e^{\alpha t^{2}}-1)\,\, \text{ for all }\, t \in \mathbb{R}, \end{equation*} from where it follows that,
\begin{equation} \label{Domina}
|F(u_n)| \leq \zeta|u_n|^{\tau+1}+C|u_n|^{q}(e^{\alpha |u_n|^{2}}-1), \quad \forall n \in \mathbb{N}.
\end{equation} Moreover, $$
\zeta|u_n|^{\tau+1}+C|u_n|^{q}(e^{\alpha |u_n|^{2}}-1)\rightarrow \zeta |u|^{\tau+1}+C|u|^{q}(e^{\alpha |u|^{2}}-1)\,\text{ a.e. in }\, \mathbb{R}^2 \text{ as }n\to+\infty. $$ Arguing as in Lemma \ref{alphat11}, there exist $m >0$ and $n_0\in \mathbb{N}$ verifying
\[
\|u_n\|^{2} < m < 1,
\,\,\,\text{for any}\,\,\, n \geq n_0.
\] Choose $\alpha>4 \pi$ close to 4$\pi$, $t>1$ close to $1$ with $\alpha m t<4\pi$, and by \cite[Lemma A.1]{Willem},
there exists $\omega \in L^{qt'}(B_R(0))$ such that $\vert u_{n}\vert\leq \omega$ a.e. in $B_R(0)$, where $t'$ is the conjugate exponent of $t$. Thus, \begin{equation*}
|u_n|^{q}(e^{\alpha |u_n|^{2}}-1)\leq \omega^{q}(e^{\alpha |u_n|^{2}}-1), \quad \text{a.e. in}\, B_R(0),
\end{equation*} and \begin{equation}\label{neww1}
|F(u_n)| \leq \zeta|u_n|^{\tau+1}+C\omega^{q}(e^{\alpha |u_n|^{2}}-1), \quad \text{a.e. in}\, B_R(0).
\end{equation} Setting
$$
h_n(x)=C(e^{\alpha |u_n|^{2}}-1),
$$ and arguing as in the proof of Lemma \ref{alphat11}, we have $$
h_n \in L^{t}(\mathbb{R}^{2}) \quad \mbox{and} \quad \sup_{n \in \mathbb{N}}|h_n|_{t}<+\infty.
$$ Therefore, for some subsequence of $(u_n)$, still denoted by itself, we derive that
\begin{equation} \label{Newlimit}
h_n \rightharpoonup h=C(e^{\alpha |u|^{2}}-1), \,\, \mbox{in} \,\, L^{t}(\mathbb{R}^{2}).
\end{equation} \begin{claim} We have
$$
\omega^{q}h_n \to \omega^{q} h \quad \mbox{in} \quad L^{1}(B_R(0)), \quad \forall R>0.
$$
\end{claim}
\noindent Indeed, for each $R>0$, we consider the characteristic function $\chi_R$ associated with $B_R(0) \subset \mathbb{R}^{2}$, that is,
$$
\chi_R(x)=
\left\{
\begin{array}{l}
1, \quad x \in B_{R}(0), \\
0, \quad x \in \mathbb{R}^{2} \setminus B_{R}(0),
\end{array}
\right.
$$
which belongs to $L^{qt'}(\mathbb{R}^{2})$. Thus, by the weak limit (\ref{Newlimit}),
$$
\int_{\mathbb{R}^{2}} \omega^{q}\chi_R h_n\,dx \to \int_{\mathbb{R}^{2}}\omega^{q}\chi_R h\,dx,
$$
or equivalently,
\begin{equation} \label{Neww2}
\int_{B_R(0)}\omega^{q} h_n\,dx \to \int_{B_R(0)}\omega^{q} h\,dx.
\end{equation} Hence, using that $u_n \to u$ in $L^{\tau+1}(B_R(0))$, together with \eqref{neww1} and \eqref{Neww2}, and applying a variant of the Lebesgue dominated convergence theorem, we deduce that
$$
F(u_n) \to F(u) \quad \mbox{in} \quad L^{1}(B_R(0)).
$$
\end{proof}
The next lemma is crucial in our argument.
\begin{lemma} \label{convergencia} Let $(u_n) \subset H_{rad}^{1}(\mathbb{R}^{2})$ be a sequence with $u_n \in S(a)$ and
$$
\limsup_{n \to +\infty} |\nabla u_n |_2^{2} < 1-a^2.
$$
Then, there exists $\alpha>4\pi$, close to $4\pi$, such that for all $q >2$,
$$
|u_n|^{q}(e^{\alpha |u_n(x)|^{2}}-1) \to |u|^{q}(e^{\alpha |u(x)|^{2}}-1) \,\, \mbox{in} \,\, L^{1}(\mathbb{R}^2).
$$
\end{lemma} \begin{proof}
Arguing as in Corollary \ref{Convergencia em limitados}, there are $\alpha>4\pi$ close to $4\pi$ and $t>1$ close to $1$ such that the sequence
$$
h_n(x)=(e^{\alpha |u_n(x)|^{2}}-1),
$$
is a bounded sequence in $L^{t}(\mathbb{R}^2)$. Therefore, for some subsequence of $(h_n)$, still denoted by itself, we derive that
\begin{equation*}
h_n \rightharpoonup h=(e^{\alpha |u|^{2}}-1) \quad \mbox{in} \quad L^{t}(\mathbb{R}^{2}).
\end{equation*}
For $t'=\frac{t}{t-1}$, we know that the embedding $H_{rad}^{1}(\mathbb{R}^2) \hookrightarrow L^{qt'}(\mathbb{R}^2)$ is compact, hence
$$
u_n \to u \quad \mbox{in} \quad L^{qt'}(\mathbb{R}^2),
$$
and so,
$$
|u_n|^{q} \to |u|^{q} \quad \mbox{in} \quad L^{t'}(\mathbb{R}^2).
$$
Thus,
$$
\lim_{n \to +\infty}\int_{\mathbb{R}^{2}}|u_n|^{q}h_n(x)\,dx=\int_{\mathbb{R}^{2}}|u|^{q}h(x)\,dx,
$$
that is,
$$
\lim_{n \to +\infty}\int_{\mathbb{R}^{2}}|u_n|^{q}(e^{\alpha |u_n(x)|^{2}}-1)\,dx=\int_{\mathbb{R}^{2}}|u|^{q}(e^{\alpha |u(x)|^{2}}-1)\,dx.
$$
Since
$$
|u_n|^{q}(e^{\alpha |u_n(x)|^{2}}-1) \geq 0 \quad \mbox{and} \quad |u|^{q}(e^{\alpha |u(x)|^{2}}-1) \geq 0,
$$
the last limit gives
$$
|u_n|^{q}(e^{\alpha |u_n(x)|^{2}}-1) \to |u|^{q}(e^{\alpha |u(x)|^{2}}-1) \quad \mbox{in} \quad L^{1}(\mathbb{R}^2).
$$
\end{proof}
\begin{corollary} \label{Convergencia em limitados1} Let $(u_{n})$ be a sequence in $H_{rad}^{1}(\mathbb{R}^{2})$ with $u_n \in S(a)$ and
$$
\limsup_{n \to +\infty} |\nabla u_n |_2^{2} < 1-a^2.
$$
If $u_n \rightharpoonup u$ in $H^{1}(\mathbb{R}^{2})$ and $u_n(x) \to u(x)$ a.e in $\mathbb{R}^{2}$, then
$$
F(u_n) \to F(u) \quad \mbox{and} \quad f(u_n)u_n \to f(u)u \,\, \mbox{in} \,\, L^{1}(\mathbb{R}^2).
$$ \end{corollary} \begin{proof}By \eqref{1.3},
$$
|F(t)|\leq\zeta |t|^{\tau+1}+C|t|^{q}(e^{\alpha t^{2}}-1)\,\, \text{ for all }\, t \in \mathbb{R},
$$
where $\alpha>4\pi$ close to $4\pi$ and $q>2$ as in the last Lemma \ref{convergencia}. Therefore,
\begin{equation} \label{Domina1}
|F(u_n)| \leq \zeta |u_n|^{\tau+1}+C|u_n|^{q}(e^{\alpha |u_n|^{2}}-1).
\end{equation}
By Lemma \ref{convergencia},
$$
|u_n|^{q}(e^{\alpha |u_n(x)|^{2}}-1) \to |u|^{q}(e^{\alpha |u(x)|^{2}}-1) \quad \mbox{in} \quad L^{1}(\mathbb{R}^2),
$$
and by the compact embedding $H_{rad}^{1}(\mathbb{R}^2) \hookrightarrow L^{\tau+1}(\mathbb{R}^2)$,
$$
u_n \to u \quad \mbox{in} \quad L^{\tau+1}(\mathbb{R}^2).
$$
Now, we can use a variant of the Lebesgue dominated convergence theorem to conclude that
$$
F(u_n) \to F(u) \quad \mbox{in} \quad L^{1}(\mathbb{R}^2).
$$
A similar argument works to show that
$$
f(u_n)u_n \to f(u)u \quad \mbox{in} \quad L^{1}(\mathbb{R}^2).
$$
\end{proof} From now on, we will use the same notations introduced in Section 2 to apply our variational procedure, more precisely
\begin{itemize}
\item[\rm (1)] $S(a)=\{u \in H^{1}(\mathbb{R}^2)\,:\, | u |_2=a\, \}$ is the sphere of radius $a>0$ defined with the norm $|\,\cdot\,|_2$.
\item[\rm (2)] $J: H^{1}(\mathbb{R}^2)\rightarrow \mathbb{R}$ with
$$
J(u)=\frac{1}{2}\int_{\mathbb{R}^{2}} |\nabla u|^2 dx-\int_{\mathbb{R}^{2}} F(u)dx.
$$
\item[\rm (3)] $\mathcal{H}: H\rightarrow H^{1}(\mathbb{R}^2)$ with
\begin{equation*}
\mathcal{H}(u, s)(x)=e^{s}u(e^{s}x).
\end{equation*}
\item[\rm (4)] $\tilde{J}: H\rightarrow \mathbb{R}$ with
$$
\tilde{J}(u, s)=\frac{e^{2s}}{2}\int_{\mathbb{R}^{2}} |\nabla u|^2 dx-\frac{1}{e^{2s}}\int_{\mathbb{R}^{2}} F(e^{s}u(x))dx.
$$
\end{itemize}
\subsection{The minimax approach}
We will prove that $\tilde{J}$ on $S(a)\times \mathbb{R}$ possesses a kind of mountain-pass geometrical structure.
\begin{lemma}\label{vicente} Assume that $(f_1)-(f_4)$ hold and let $u\in S(a)$ be arbitrary but fixed. Then we have:\\
\noindent (i) $\vert \nabla \mathcal{H}(u, s) \vert_{2}\rightarrow 0$ and $J(\mathcal{H}(u, s))\rightarrow 0$ as $s\rightarrow -\infty$;\\
\noindent (ii) $\vert \nabla \mathcal{H}(u, s) \vert_{2}\rightarrow +\infty$ and $J(\mathcal{H}(u, s))\rightarrow -\infty$ as $s\rightarrow +\infty$. \end{lemma}
\begin{proof} \mbox{} By a straightforward calculation, it follows that
\begin{equation} \label{CONV0}\int_{\mathbb{R}^2}\vert \mathcal{H}(u, s)(x)\vert^{2}\,dx=a^{2}, \quad \int_{\mathbb{R}^2}|\mathcal{H}(u, s)(x)|^{\xi}\,dx= e^{(\xi-2)s}\int_{\mathbb{R}^2}|u(x)|^{\xi}\,dx, \quad \forall \xi \geq 2,
\end{equation}
and
\begin{equation} \label{CONV1*}
\int_{\mathbb{R}^2}\vert \nabla \mathcal{H}(u, s)(x)\vert^{2}\,dx=e^{2s}\int_{\mathbb{R}^2}|\nabla u|^2 dx.
\end{equation}
From the above equalities, fixing $\xi>2$, we have
\begin{equation} \label{CONV1}
\vert \nabla \mathcal{H}(u, s) \vert_{2}\rightarrow 0 \quad \mbox{and} \quad |\mathcal{H}(u,s)|_{\xi} \to 0 \quad \mbox{as} \quad s \to -\infty.
\end{equation}
Thus, there are $s_1<0$ and $m \in (0,1)$ such that
$$
\| \mathcal{H}(u, s)\|^{2} \leq m, \quad \forall s \in (-\infty,s_1].
$$ By \eqref{1.3},
$$
|F(t)|\leq\zeta |t|^{\tau+1}+C|t|^{q}(e^{\alpha t^{2}}-1)\,\, \text{ for all }\, t \in \mathbb{R},
$$
where $\alpha>4\pi$ close to $4\pi$ and $q>2$ as in the last Lemma \ref{convergencia}. Hence,
$$
|F( \mathcal{H}(u, s))| \leq \zeta| \mathcal{H}(u, s)|^{\tau+1}+C| \mathcal{H}(u, s)|^{q}(e^{\alpha | \mathcal{H}(u, s)|^{2}}-1), \quad \forall s \in (-\infty,s_1].
$$
Using the H\"older's inequality together with Lemma \ref{Cao}, there exists $C=C(u,m)>0$
such that
$$
\int_{\mathbb{R}^2} (e^{\alpha | \mathcal{H}(u, s)|^{2}}-1)^{t}dx\leq C,
$$
and so,
$$
\int_{\mathbb{R}^2}|F( \mathcal{H}(u, s))|\,dx \leq \zeta\int_{\mathbb{R}^2}| \mathcal{H}(u, s)|^{\tau+1}\,dx+C_1\Big(\int_{\mathbb{R}^2}| \mathcal{H}(u, s)|^{qt'}\,dx\Big)^{1/t'}, \quad \forall s \in (-\infty,s_1],
$$
where $t'=\frac{t}{t-1}$, and $t>1$ is close to 1. Now, by using (\ref{CONV1}),
$$
\int_{\mathbb{R}^2}|F( \mathcal{H}(u, s))|\,dx \to 0 \quad \mbox{as} \quad s \to -\infty,
$$
from where it follows that
$$
J(\mathcal{H}(u, s))\rightarrow 0 \quad \mbox{as} \quad s\rightarrow -\infty,
$$
showing $(i)$.
In order to show $(ii)$, note that by (\ref{CONV1*}),
$$
|\nabla \mathcal{H}(u, s) |_{2}\rightarrow +\infty \quad \mbox{as} \quad s \to +\infty.
$$
On the other hand, by $(f_4)$,
$$
J(\mathcal{H}(u, s)) \leq \frac{1}{2}|\nabla \mathcal{H}(u,s)|_2^2-\frac{\mu}{p}\int_{\mathbb{R}^2}|\mathcal{H}(u,s)|^{p}\,dx=\frac{e^{2s}}{2}\int_{\mathbb{R}^2}|\nabla u|^2 dx-\frac{\mu e^{(p-2)s}}{p}\int_{\mathbb{R}^2}|u(x)|^{p}\,dx.
$$
Since $p>4$, the last inequality yields
$$
J(\mathcal{H}(u, s))\rightarrow -\infty \,\, \mbox{as} \,\, s\rightarrow +\infty.
$$
\end{proof}
\begin{lemma} \label{P1} Assume that $(f_1)-(f_3)$ hold. Then there exists $K(a)>0$ small enough such that
$$
0<\sup_{u\in A} J(u)<\inf_{u\in B} J(u)
$$
with
$$
A=\left\{u\in S(a), \int_{\mathbb{R}^2} |\nabla u|^2 dx\leq K(a) \right\}\quad \mbox{and} \quad B=\left\{u\in S(a), \int_{\mathbb{R}^2} |\nabla u|^2 dx=2K(a) \right\}.
$$ \end{lemma} \begin{proof}
We will need the following Gagliardo-Nirenberg inequality: for any $\xi> 2$,
$$
|u|_\xi\leq C(\xi, 2)|\nabla u|_2^{\gamma}|u|_2^{1-\gamma},
$$
where $\gamma=2(\frac{1}{2}-\frac{1}{\xi})$. Fix $K(a)<\frac{1-a^2}{2}$ and let $u,v\in S(a)$ with $|\nabla u|^{2}_{2}\leq K(a)$ and $|\nabla v|^{2}_2=2K(a)$, so that $\|u\|^{2},\|v\|^{2}<1$. The conditions $(f_1)-(f_2)$ combined with Lemma \ref{Cao} ensure that
$$
\int_{\mathbb{R}^2}F(v)\,dx \leq \zeta|v|^{\tau+1}_{\tau+1}+C_2|v|^{q}_{qt'},
$$
where $q>2$, $t'=\frac{t}{t-1}>1$ and $t>1$ is close to 1. Then, by the Gagliardo-Nirenberg inequality,
$$
\int_{\mathbb{R}^2}F(v)\,dx \leq C_1|\nabla v|_2^{\tau-1}+C_2|\nabla v|^{(q-2/t')}_{2}.
$$
From $(f_3)$, $F(u)\geq 0$ for any $u\in H^{1}(\mathbb{R}^{2})$, then
\begin{eqnarray*}
J(v)-J(u) &=&\frac{1}{2}\int_{\mathbb{R}^2}|\nabla v|^{2}\,dx-\frac{1}{2}\int_{\mathbb{R}^2}|\nabla u|^{2}\,dx-\int_{\mathbb{R}^2}F(v)\,dx+\int_{\mathbb{R}^2}F(u)\,dx\\
&\geq &\frac{1}{2}\int_{\mathbb{R}^2}|\nabla v|^{2}\,dx-\frac{1}{2}\int_{\mathbb{R}^2}|\nabla u|^{2}\,dx-\int_{\mathbb{R}^2}F(v)\,dx,
\end{eqnarray*}
and so,
$$
J(v)-J(u) \geq \frac{1}{2}K(a)-C_3K(a)^{\frac{\tau-1}{2}}-C_4 K(a)^{(\frac{q}{2}-\frac{1}{t'})}.
$$
Since $\tau>3$ and $\frac{q}{2}-\frac{1}{t'}>1$, decreasing $K(a)$ if necessary, it follows that
$$
\frac{1}{2}K(a)-C_3K(a)^{\frac{\tau-1}{2}}-C_4 K(a)^{(\frac{q}{2}-\frac{1}{t'})}>0,
$$
showing the desired result. \end{proof}
As a byproduct of the last lemma is the following corollary. \begin{corollary} \label{newcor}
There exists $K(a)>0$ small enough such that if $u \in S(a)$ and $|\nabla u|^2_{2}\leq K(a)$, then $J(u) >0$. \end{corollary} \begin{proof} Arguing as in the last lemma,
$$
J(u) \geq \frac{1}{2}|\nabla u|_{2}^{2}-C_1|\nabla u|_2^{\tau-1}-C_2|\nabla u|^{(q-\frac{2}{t'})}_{2}>0,
$$
for $K(a)>0$ small enough. \end{proof}
In what follows, we fix $u_0 \in S(a)$ and apply Lemma \ref{vicente} and Corollary \ref{newcor} to get two numbers $s_1<0$ and $s_2>0$, of such way that the functions $u_1=\mathcal{H}(u_0, s_1)$ and $u_2=\mathcal{H}(u_0, s_2)$ satisfy $$
|\nabla u_1|^2_2<\frac{K(a)}{2},\,\, |\nabla u_2|_2^2>2K(a),\,\, J(u_1)>0\,\,\mbox{and} \,\, J(u_2)<0. $$
Now, following the ideas from Jeanjean \cite{jeanjean1}, we fix the following mountain pass level given by $$ \gamma_\mu(a)=\inf_{h \in \Gamma}\max_{t \in [0,1]}J(h(t)) $$ where $$ \Gamma=\left\{h \in C([0,1],S(a))\,:\,h(0)=u_1 \,\, \mbox{and} \,\, h(1)=u_2 \right\}. $$ From Lemma \ref{P1}, for every $h \in \Gamma$, $$ \max_{t \in [0,1]}J(h(t))>\max \left\{J(u_1),J(u_2)\right\}. $$ \begin{lemma} \label{ESTMOUNTPASS} There holds $\displaystyle \lim_{\mu \to +\infty}\gamma_\mu(a)=0$.
\end{lemma} \begin{proof} In what follows we set the path $h_0(t)=\mathcal{H}\big(u_0, (1-t)s_1+ts_2\big) \in \Gamma$. Then, by $(f_4)$,
$$
\gamma_\mu(a) \leq \max_{t \in [0,1]}J(h_0(t)) \leq \max_{r \geq 0}\left\{\frac{r^{2}}{2}|\nabla u_{0}|_{2}^{2}-\frac{\mu}{p}r^{p-2}|u_{0}|_{p}^{p}\right\}
$$
and so,
$$
\gamma_\mu(a) \leq C_2\left(\frac{1}{\mu}\right)^{\frac{2}{p-4}} \to 0 \quad \mbox{as} \quad \mu \to +\infty,
$$
for some $C_2>0$. Here, we have used the fact that $p>4$.
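Indeed, consider $g(r)=\frac{r^{2}}{2}|\nabla u_{0}|_{2}^{2}-\frac{\mu}{p}r^{p-2}|u_{0}|_{p}^{p}$ for $r \geq 0$. Since $p>4$, the function $g$ attains its maximum at the unique critical point $r_\mu>0$ determined by
$$
r_\mu^{p-4}=\frac{p|\nabla u_{0}|_{2}^{2}}{\mu (p-2)|u_{0}|_{p}^{p}},
$$
and therefore
$$
\max_{r \geq 0}g(r) \leq \frac{r_\mu^{2}}{2}|\nabla u_{0}|_{2}^{2}=\frac{|\nabla u_{0}|_{2}^{2}}{2}\left(\frac{p|\nabla u_{0}|_{2}^{2}}{(p-2)|u_{0}|_{p}^{p}}\right)^{\frac{2}{p-4}}\left(\frac{1}{\mu}\right)^{\frac{2}{p-4}},
$$
which makes the constant $C_2$ explicit.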
\end{proof}
Arguing as in Section 2, in what follows $(u_n)$ denotes the $(PS)$ sequence associated with the level $\gamma_\mu(a)$, which satisfies: \begin{equation} \label{gamma(a)1}
J(u_n) \to \gamma_\mu(a), \,\, \mbox{as} \,\, n \to +\infty, \end{equation} \begin{equation} \label{EQ101}
-\Delta u_n-f(u_n)=\lambda_nu_n\ + o_n(1), \,\, \mbox{in} \,\, (H^{1}(\mathbb{R}^2))^*, \end{equation} for some sequence $(\lambda_n) \subset \mathbb{R}$, and \begin{equation} \label{EQ1**1}
Q(u_n)=\int_{\mathbb{R}^2}|\nabla u_n|^2 dx +2 \int_{\mathbb{R}^2} F(u_n) dx- \int_{\mathbb{R}^2} f(u_n) u_n dx\to 0,\,\, \mbox{as} \,\, n \to +\infty. \end{equation} Moreover, $(u_n)$ is a bounded sequence, and so, the number $\lambda_n$ must satisfy the equality below \begin{equation} \label{lambdan1}
\lambda_n=\frac{1}{a^{2}}\left\{ |\nabla u_n|^{2}_{2} -\int_{\mathbb{R}^2} f(u_n) u_n dx \right\}+o_n(1). \end{equation}
\begin{lemma}\label{newlem1} There holds
$$
\limsup_{n \to +\infty}\int_{\mathbb{R}^2}F(u_n)\,dx \leq \frac{2}{\theta-4}\gamma_\mu(a).
$$ \end{lemma} \begin{proof} Using the fact that $J(u_n)=\gamma_\mu(a)+o_n(1)$ and $Q(u_n)=o_n(1)$, it follows that
$$
2{J}(u_n)+Q(u_n)=2\gamma_\mu(a)+o_n(1),
$$
and so,
\begin{eqnarray}\label{equa1}
2|\nabla u_n|_{2}^{2}-\int_{\mathbb{R}^2}f(u_n)u_n\,dx=2\gamma_\mu(a)+o_n(1).
\end{eqnarray} Using that $J(u_n)=\gamma_\mu(a)+o_n(1)$, we get $$ 4\int_{\mathbb{R}^2} F(u_n) dx+4\gamma_\mu(a)+o_n(1)-\int_{\mathbb{R}^2}f(u_n)u_n\,dx=2\gamma_\mu(a)+o_n(1). $$ Hence, by $(f_3)$, $$ 2\gamma_\mu(a)+o_n(1)=\int_{\mathbb{R}^2}f(u_n)u_n\,dx-4\int_{\mathbb{R}^2} F(u_n) dx\geq (\theta-4)\int_{\mathbb{R}^2} F(u_n) dx. $$ Since $\theta>4$, we have $$
\limsup_{n \to +\infty}\int_{\mathbb{R}^2}F(u_n)\,dx \leq \frac{2}{\theta-4}\gamma_\mu(a). $$ \end{proof}
\begin{lemma} \label{goodest} The sequence $(u_n)$ satisfies $\displaystyle \limsup_{n \to +\infty}|\nabla u_n|_{2}^{2} \leq \frac{2(\theta-2)}{\theta-4} \gamma_\mu(a)$. Hence, there exists $\mu^*>0$ such that
$$
\limsup_{n \to +\infty}|\nabla u_n|_{2}^{2}<1-a^2, \,\,\text{for any} \,\,\,\mu \geq \mu^*.
$$
\end{lemma} \begin{proof} Since $J(u_n)=\gamma_\mu(a)+o_n(1)$, we have
$$
\int_{\mathbb{R}^2}|\nabla u_n|^2\,dx=2\gamma_\mu(a)+2\int_{\mathbb{R}^2}F(u_n)\,dx+o_n(1).
$$
Thereby, by Lemma \ref{newlem1}, $$
\limsup_{n \to +\infty}|\nabla u_n|_{2}^{2} \leq \frac{2(\theta-2)}{\theta-4} \gamma_\mu(a).
$$ The second assertion follows from Lemma \ref{ESTMOUNTPASS}: since $\gamma_\mu(a) \to 0$ as $\mu \to +\infty$, there exists $\mu^*>0$ such that $\frac{2(\theta-2)}{\theta-4}\gamma_\mu(a)<1-a^2$ for all $\mu \geq \mu^*$. \end{proof}
\begin{lemma} \label{mu} Fix $\mu \geq \mu^*$, where $\mu^*$ is given in Lemma \ref{goodest}. Then, $(\lambda_n)$ is a bounded sequence with
$$
\limsup_{n \to +\infty} |\lambda_n| \leq \frac{2\theta}{a^2(\theta-4)}\gamma_\mu(a) \quad \mbox{and} \quad \limsup_{n \to +\infty} \lambda_n =-\frac{2}{a^2}\liminf_{n \to +\infty}\int_{\mathbb{R}^2}F(u_n)\,dx.
$$ \end{lemma} \begin{proof} The boundedness of $(u_n)$ yields that $(\lambda_n)$ is bounded, because
$$
\lambda_n|u_n|_{2}^{2} =|\nabla u_n|_{2}^{2}-\int_{\mathbb{R}^2}f(u_n)u_n\,dx+o_n(1),
$$
and as $|u_n|_{2}^{2}=a^2$, we have
$$
\lambda_na^2=|\nabla u_n|_{2}^{2}-\int_{\mathbb{R}^2}f(u_n)u_n\,dx+o_n(1).
$$
Hence, since both terms on the right-hand side of the last equality are nonnegative,
$$
|\lambda_n|a^2 \leq \max\left\{|\nabla u_n|_{2}^{2},\int_{\mathbb{R}^2}f(u_n)u_n\,dx\right\}+o_n(1).
$$
The limit (\ref{EQ1**1}) together with Lemmas \ref{newlem1} and \ref{goodest} ensures that $(\int_{\mathbb{R}^2}f(u_n)u_n\,dx)$ is bounded with $$ \limsup_{n \to +\infty}\int_{\mathbb{R}^2}f(u_n)u_n\,dx\leq \frac{2\theta}{\theta-4} \gamma_\mu(a). $$ This is enough to conclude that $(\lambda_n)$ is a bounded sequence with $$
\limsup_{n \to +\infty} |\lambda_n| \leq \frac{2\theta}{a^2(\theta-4)}\gamma_\mu(a). $$ In order to prove the second inequality, the equality
$$
\lambda_na^2=|\nabla u_n|_{2}^{2}-\int_{\mathbb{R}^2}f(u_n)u_n\,dx+o_n(1)
$$
together with the limit (\ref{EQ1**1}) leads to
$$
\lambda_na^2=-2\int_{\mathbb{R}^2}F(u_n)\,dx+o_n(1),
$$
showing the desired result. \end{proof}
Now, we restrict our study to the space $H_{rad}^{1}(\mathbb{R}^2)$. For any $\mu \geq \mu^*$, using Lemmas \ref{convergencia} and \ref{goodest}, Corollary \ref{Convergencia em limitados1}, it follows that $$ \lim_{n \to +\infty}\int_{\mathbb{R}^2}f(u_n)u_n\,dx=\int_{\mathbb{R}^2}f(u)u\,dx, $$ and $$ \lim_{n \to +\infty}\int_{\mathbb{R}^2}F(u_n)\,dx=\int_{\mathbb{R}^2}F(u)\,dx, $$ where $u_n \rightharpoonup u$ in $H^{1}(\mathbb{R}^2)$. The last limit implies that $u \not=0$, because otherwise, Corollary \ref{Convergencia em limitados1} gives $$ \lim_{n \to +\infty}\int_{\mathbb{R}^2}F(u_n)\,dx=\lim_{n \to +\infty}\int_{\mathbb{R}^2}f(u_n)u_n\,dx=0, $$ and by Lemma \ref{mu}, $$ \limsup_{n \to +\infty} \lambda_n \leq 0. $$
Since $(u_n)$ is bounded in $H^{1}(\mathbb{R}^2)$ and $\displaystyle \limsup_{n \to +\infty}|\nabla u_n|_{2}^{2}<1-a^2$ if $\mu \geq \mu^*$, Corollary \ref{Convergencia em limitados1} together with $(f_1)-(f_2)$ and the equality below \begin{equation*}
\lambda_n| u_n|_{2}^{2}=|\nabla u_n|^{2}_{2}-\int_{\mathbb{R}^2}f(u_n)u_n\,dx+o_n(1), \end{equation*} lead to \begin{equation} \label{mu2}
\lambda_na^2=|\nabla u_n|_{2}^{2}+o_n(1). \end{equation} From this, $$
0 \geq \limsup_{n \to +\infty} \lambda_n a^2= \limsup_{n \to +\infty} |\nabla u_n|^{2}_{2} \geq \liminf_{n \to +\infty} |\nabla u_n|^{2}_{2}\geq 0, $$ then $$
|\nabla u_n|^{2}_{2} \to 0, $$ which is absurd, because $\gamma_\mu(a)>0$.
\section{Proof of Theorem \ref{T2}}
The above analysis ensures that the weak limit $u$ of $(u_n)$ is nontrivial. Moreover, the equality $$ \limsup_{n \to +\infty} \lambda_n =-\frac{2}{a^2}\liminf_{n \to +\infty}\int_{\mathbb{R}^2}F(u_n)\,dx $$ together with the limit $\int_{\mathbb{R}^2}F(u_n)\,dx \to \int_{\mathbb{R}^2}F(u)\,dx$ ensures that $$ \limsup_{n \to +\infty} \lambda_n =-\frac{2}{a^2}\int_{\mathbb{R}^2}F(u)\,dx<0. $$ From this, for some subsequence, still denoted by $(\lambda_n)$, we can assume that $$ \lambda_n \to \lambda_a<0 \quad \mbox{as} \quad n \to +\infty. $$ Now, the equality (\ref{EQ101}) implies that \begin{equation} \label{EQ21}
-\Delta u-f(u)=\lambda_au \quad \mbox{in} \quad \mathbb{R}^2. \end{equation} Thus, $$
|\nabla u|_{2}^{2}-\lambda_a|u|_{2}^{2}=\int_{\mathbb{R}^2}f(u)u\,dx. $$ On the other hand, $$
|\nabla u_n|_{2}^{2}-\lambda_n|u_n|_{2}^{2}=\int_{\mathbb{R}^2}f(u_n)u_n\,dx+o_n(1), $$ or equivalently, $$
|\nabla u_n|_{2}^{2}-\lambda_a|u_n|_{2}^{2}=\int_{\mathbb{R}^2}f(u_n)u_n\,dx+o_n(1). $$ Recalling that $$ \lim_{n \to +\infty}\int_{\mathbb{R}^2}f(u_n)u_n\,dx=\int_{\mathbb{R}^2}f(u)u\,dx, $$ we derive that $$
\lim_{n \to +\infty}(|\nabla u_n|_{2}^{2}-\lambda_a|u_n|_{2}^{2})=|\nabla u|_{2}^{2}-\lambda_a|u|_{2}^{2}. $$ Since $\lambda_a<0$, the last limit implies that $$ u_n \to u \quad \mbox{in} \quad H^{1}(\mathbb{R}^2), $$
implying that $|u|_{2}^{2}=a^2$. This establishes the desired result.\\
\noindent \textsc{Claudianor O. Alves } \\ Unidade Acad\^{e}mica de Matem\'atica\\ Universidade Federal de Campina Grande \\ Campina Grande, PB, CEP:58429-900, Brazil \\ \texttt{[email protected]} \\ \noindent and \\ \noindent \textsc{Chao Ji} \\ Department of Mathematics\\ East China University of Science and Technology \\ Shanghai 200237, PR China \\ \texttt{[email protected]}\\ \noindent and \\ \noindent \textsc{Ol\'{i}mpio Hiroshi, Miyagaki} \\ Departamento de Matem\'{a}tica \\ Universidade Federal de S\~ao Carlos \\ S\~ao Carlos, SP, CEP:13565-905, Brazil\\ \texttt{[email protected]}
\end{document}
Logical reduction of metarules
Andrew Cropper (ORCID: orcid.org/0000-0002-4543-7199) and Sophie Tourret
Machine Learning (2019)
Many forms of inductive logic programming (ILP) use metarules, second-order Horn clauses, to define the structure of learnable programs and thus the hypothesis space. Deciding which metarules to use for a given learning task is a major open problem and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules, so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this paper, we study whether fragments of metarules can be logically reduced to minimal finite subsets. We consider two traditional forms of logical reduction: subsumption and entailment. We also consider a new reduction technique called derivation reduction, which is based on SLD-resolution. We compute reduced sets of metarules for fragments relevant to ILP and theoretically show whether these reduced sets are reductions for more general infinite fragments. We experimentally compare learning with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules. In general, derivation reduced sets of metarules outperform subsumption and entailment reduced sets, both in terms of predictive accuracies and learning times.
Many forms of inductive logic programming (ILP) (Albarghouthi et al. 2017; Campero et al. 2018; Cropper and Muggleton 2019; Emde et al. 1983; Evans and Grefenstette 2018; Flener 1996; Kaminski et al. 2018; Kietz and Wrobel 1992; Muggleton et al. 2015; De Raedt and Bruynooghe 1992; Si et al. 2018; Wang et al. 2014) use second-order Horn clauses, called metarules,Footnote 1 as a form of declarative bias (De Raedt 2012). Metarules define the structure of learnable programs which in turn defines the hypothesis space. For instance, to learn the grandparent/2 relation given the parent/2 relation, the chain metarule would be suitable:
$$\begin{aligned} P(A,B) \leftarrow Q(A,C), R(C,B) \end{aligned}$$
In this metaruleFootnote 2 the letters P, Q, and R denote existentially quantified second-order variables (variables that can be bound to predicate symbols) and the letters A, B and C denote universally quantified first-order variables (variables that can be bound to constant symbols). Given the chain metarule, the background parent/2 relation, and examples of the grandparent/2 relation, ILP approaches will try to find suitable substitutions for the existentially quantified second-order variables, such as the substitutions {P/grandparent, Q/parent, R/parent} to induce the theory:
$$\begin{aligned} grandparent (A,B) \leftarrow parent (A,C), parent (C,B) \end{aligned}$$
However, despite the widespread use of metarules, there is little work determining which metarules to use for a given learning task. Instead, suitable metarules are assumed to be given as part of the background knowledge, and are often used without any theoretical justification. Deciding which metarules to use for a given learning task is a major open challenge (Cropper 2017; Cropper and Muggleton 2014) and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules (Cropper and Muggleton 2014; Lin et al. 2014), so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. For instance, it is impossible to learn the grandparent/2 relation using only metarules with monadic predicates.
In this paper, we study whether potentially infinite fragments of metarules can be logically reduced to minimal, or irreducible, finite subsets, where a fragment is a syntactically restricted subset of a logical theory (Bradley and Manna 2007).
Cropper and Muggleton (2014) first studied this problem. They used Progol's entailment reduction algorithm (Muggleton 1995) to identify entailment reduced sets of metarules, where a clause C is entailment redundant in a clausal theory \(T \cup \{C\}\) when \(T \models C\). To illustrate entailment redundancy, consider the following first-order clausal theory \(T_1\), where p, q, r, and s are first-order predicates:
$$\begin{aligned} \begin{aligned} C_1&= p(A,B) \leftarrow q(A,B) \\ C_2&= p(A,B) \leftarrow q(A,B),r(A) \\ C_3&= p(A,B) \leftarrow q(A,B),r(A),s(B,C) \end{aligned} \end{aligned}$$
In \(T_1\) the clauses \(C_2\) and \(C_3\) are entailment redundant because they are both logical consequences of \(C_1\), i.e. \(\{C_1\} \models \{C_2,C_3\}\). Because \(\{C_1\}\) cannot be reduced, it is a minimal entailment reduction of \(T_1\).
Cropper and Muggleton showed that in some cases as few as two metarules are sufficient to entail an infinite fragment of chainedFootnote 3 second-order dyadic Datalog (Cropper and Muggleton 2014). They also showed that learning with minimal sets of metarules improves predictive accuracies and reduces learning times compared to non-minimal sets. To illustrate how a finite subset of metarules could entail an infinite set, consider the set of metarules with only monadic literals and a single first-order variable A:
$$\begin{aligned} \begin{aligned}&M_1 = P(A) \leftarrow T_1(A)\\&M_2 = P(A) \leftarrow T_1(A),T_2(A)\\&M_3 = P(A) \leftarrow T_1(A),T_2(A),T_3(A)\\&\dots \\&M_{n} = P(A) \leftarrow T_1(A),T_2(A),\dots ,T_{n}(A)\\&\dots \\ \end{aligned} \end{aligned}$$
Although this set is infinite it can be entailment reduced to the single metarule \(M_1\) because it implies the rest of the theory.
However, in this paper, we claim that entailment reduction is not always the most appropriate form of reduction. For instance, suppose you want to learn the father/2 relation given the background relations parent/2, male/1, and female/1. Then a suitable hypothesis is:
$$\begin{aligned} father (A,B) \leftarrow parent (A,B), male (A) \end{aligned}$$
To learn such a hypothesis one would need a metarule of the form \(P(A,B) \leftarrow Q(A,B),R(A)\). Now suppose you have the metarules:
$$\begin{aligned} \begin{aligned} M_1&= P(A,B) \leftarrow Q(A,B) \\ M_2&= P(A,B) \leftarrow Q(A,B),R(A) \end{aligned} \end{aligned}$$
Running entailment reduction on these metarules would remove \(M_2\) because it is a logical consequence of \(M_1\). However, it is impossible to learn the intended father/2 relation given only \(M_1\). As this example shows, entailment reduction can be too strong because it can remove metarules necessary to specialise a clause, where \(M_2\) can be seen as a specialisation of \(M_1\).
To address this issue, we introduce derivation reduction, a new form of reduction based on derivations, which we claim is a more suitable form of reduction for reducing sets of metarules. Let \(\vdash \) represent derivability in SLD-resolutionFootnote 4 (Kowalski 1974). Then a Horn clause C is derivationally redundant in a Horn theory \(T \cup \{C\}\) when \(T \vdash C\). A Horn theory is derivationally irreducible if it contains no derivationally redundant clauses. To illustrate the difference between entailment and derivation reduction, consider the metarules:
$$\begin{aligned} \begin{aligned} M_1&= P(A,B) \leftarrow Q(A,B)\\ M_2&= P(A,B) \leftarrow Q(A,B),R(A)\\ M_3&= P(A,B) \leftarrow Q(A,B),R(A,B)\\ M_4&= P(A,B) \leftarrow Q(A,B),R(A,B),S(A,B) \end{aligned} \end{aligned}$$
Running entailment reduction on these metarules would result in the reduction \(\{M_1\}\) because \(M_1\) entails the rest of the theory. Likewise, running subsumption reduction (Plotkin 1971) (described in detail in Sect. 3.5) would also result in the reduction \(\{M_1\}\). By contrast, running derivation reduction would only remove \(M_4\) because it can be derived by self-resolving \(M_3\). The remaining metarules \(M_2\) and \(M_3\) are not derivationally redundant because there is no way to derive them from the other metarules.
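To make the last claim concrete, the step behind it is a single self-resolution. Take a variable-renamed copy \(P'(A',B') \leftarrow Q'(A',B'),R'(A',B')\) of \(M_3\) and resolve its head against the body literal \(R(A,B)\) of \(M_3\), using the unifier {P'/R, A'/A, B'/B}. The resolvent is
$$\begin{aligned} P(A,B) \leftarrow Q(A,B), Q'(A,B), R'(A,B) \end{aligned}$$
which is a variant of \(M_4\) up to renaming of the second-order variables, so \(M_4\) is derivable from \(M_3\) alone.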
In the rest of this paper, we study whether fragments of metarules relevant to ILP can be logically reduced to minimal finite subsets. We study three forms of reduction: subsumption (Robinson 1965), entailment (Muggleton 1995), and derivation (Cropper and Tourret 2018). We also study how learning with reduced sets of metarules affects learning performance. To do so, we supply Metagol (Cropper and Muggleton 2016b), a meta-interpretive learning (MIL) (Cropper and Muggleton 2016a; Muggleton et al. 2014, 2015) implementation, with different reduced sets of metarules and measure the resulting learning performance on three domains: Michalski trains (Larson and Michalski 1977), string transformations, and game rules (Cropper et al. 2019). In general, using derivation reduced sets of metarules outperforms using subsumption and entailment reduced sets, both in terms of predictive accuracies and learning times. Overall, our specific contributions are:
We describe the logical reduction problem (Sect. 3).
We describe subsumption and entailment reduction, and introduce derivation reduction, the problem of removing derivationally redundant clauses from a clausal theory (Sect. 3).
We study the decidability of the three reduction problems and show, for instance, that the derivation reduction problem is undecidable for arbitrary Horn theories (Sect. 3).
We introduce two general reduction algorithms that take a reduction relation as a parameter. We also study their complexity (Sect. 4).
We run the reduction algorithms on finite sets of metarules to identify minimal sets (Sect. 5).
We theoretically show whether infinite fragments of metarules can be logically reduced to finite sets (Sect. 5).
We experimentally compare the learning performance of Metagol when supplied with reduced sets of metarules on three domains: Michalski trains, string transformations, and game rules (Sect. 6).
Related work

This section describes work related to this paper, mostly work on logical reduction techniques. We first, however, describe work related to MIL and metarules.
Meta-interpretive learning
Although the study of metarules has implications for many ILP approaches (Albarghouthi et al. 2017; Campero et al. 2018; Cropper and Muggleton 2019; Emde et al. 1983; Evans and Grefenstette 2018; Flener 1996; Kaminski et al. 2018; Kietz and Wrobel 1992; Muggleton et al. 2015; De Raedt and Bruynooghe 1992; Si et al. 2018; Wang et al. 2014), we focus on meta-interpretive learning (MIL), a form of ILP based on a Prolog meta-interpreter.Footnote 5 The key difference between a MIL learner and a standard Prolog meta-interpreter is that whereas a standard Prolog meta-interpreter attempts to prove a goal by repeatedly fetching first-order clauses whose heads unify with a given goal, a MIL learner additionally attempts to prove a goal by fetching second-order metarules, supplied as background knowledge (BK), whose heads unify with the goal. The resulting meta-substitutions are saved and can be reused in later proofs. Following the proof of a set of goals, a logic program is formed by projecting the meta-substitutions onto their corresponding metarules, allowing for a form of ILP which supports predicate invention and learning recursive theories.
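To make this mechanism concrete, the following Prolog sketch shows a greatly simplified MIL-style prover. It is not Metagol itself: the predicate names (bg/1, metarule/4, prove/3), the list-based atom representation, and the family facts are ours, chosen purely for illustration, and the sketch omits ordering constraints, negative examples, and any depth bound (so it may loop under backtracking).

% Background knowledge, supplied as ordinary first-order facts.
bg(parent(ann,bob)).
bg(parent(bob,carol)).

% metarule(Name, SecondOrderVars, Head, Body), with atoms written as lists.
metarule(chain, [P,Q,R], [P,A,B], [[Q,A,C],[R,C,B]]).

% prove(+Goals, +ProgIn, -ProgOut): prove each goal either directly against
% the background knowledge or by instantiating a metarule, collecting the
% meta-substitutions that were used.
prove([], Prog, Prog).
prove([Goal|Goals], Prog0, Prog) :-
    bg(Atom),
    Atom =.. Goal,                    % e.g. parent(ann,bob) versus [Q,ann,C]
    prove(Goals, Prog0, Prog).
prove([Goal|Goals], Prog0, Prog) :-
    metarule(Name, Subs, Goal, Body),
    prove(Body, [ms(Name,Subs)|Prog0], Prog1),
    prove(Goals, Prog1, Prog).

The query ?- prove([[grandparent,ann,carol]], [], Prog). succeeds with Prog = [ms(chain,[grandparent,parent,parent])]; projecting this meta-substitution onto the chain metarule yields exactly the grandparent/2 clause shown in the introduction.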
Most existing work on MIL has assumed suitable metarules as input to the problem, or has used metarules without any theoretical justification. In this paper, we try to address this issue by identifying minimal sets of metarules for interesting fragments of logic, such as Datalog, from which a MIL system can theoretically learn any logic program.
Metarules
McCarthy (1995) and Lloyd (2003) advocated using second-order logic to represent knowledge. Similarly, Muggleton et al. (2012) argued that using second-order representations in ILP provides more flexible ways of representing BK compared to existing methods. Metarules are second-order Horn clauses and are used as a form of declarative bias (Nédellec et al. 1996; De Raedt 2012) to determine the structure of learnable programs which in turn defines the hypothesis space. In contrast to other forms of declarative bias, such as modes (Muggleton 1995) or grammars (Cohen 1994), metarules are logical statements that can be reasoned about, such as to reason about the redundancy of sets of metarules, which we explore in this paper.
Metarules were introduced in the Blip system (Emde et al. 1983). Kietz and Wrobel (1992) studied generality measures for metarules in the RDT system. A generality order is necessary because the RDT system searches the hypothesis space (which is defined by the metarules) in a top-down general-to-specific order. A key difference between RDT and MIL is that whereas RDT requires metarules of increasing complexity (e.g. rules with an increasing number of literals in the body), MIL derives more complex metarules through SLD-resolution. This point is important because this ability allows MIL to start from smaller sets of primitive metarules. In this paper we try to identify such primitive sets.
Using metarules to build a logic program is similar to the use of refinement operators in ILP (Nienhuys-Cheng and de Wolf 1997; Shapiro 1983) to build a definite clause literal-by-literal.Footnote 6 As with refinement operators, it seems reasonable to ask about completeness and irredundancy of a set of metarules, which we explore in this paper.
Logical redundancy
Detecting and eliminating redundancy in a clausal theory is useful in many areas of computer science. In ILP logically reducing a theory is useful to remove redundancy from a hypothesis space to improve learning performance (Cropper and Muggleton 2014; Fonseca et al. 2004). In general, simplifying or reducing a theory often makes a theory easier to understand and use, and may also have computational efficiency advantages.
Literal redundancy
Plotkin (1971) used subsumption to decide whether a literal is redundant in a first-order clause. Joyner (1976) independently investigated the same problem, which he called clause condensation, where a condensation of a clause C is a minimum cardinality subset \(C'\) of C such that \(C' \models C\). Gottlob and Fermüller (1993) improved Joyner's algorithm and also showed that determining whether a clause is condensed is co-NP-complete. In contrast to removing redundant literals, we focus on removing redundant clauses.
Clause redundancy
Plotkin (1971) introduced methods to decide whether a clause is subsumption redundant in a first-order clausal theory. This problem has also been extensively studied in the context of first-order logic with equality due to its application in superposition-based theorem proving (Hillenbrand et al. 2013; Weidenbach and Wischnewski 2010). The same problem, and slight variants, has been extensively studied in the propositional case (Liberatore 2005, 2008). Removing redundant clauses has numerous applications, such as to improve the efficiency of SAT (Heule et al. 2015). In contrast to these works, we focus on reducing theories formed of second-order Horn clauses (without equality), which to our knowledge has not yet been extensively explored. Another difference is that we additionally study redundancy based on SLD-derivations.
Cropper and Muggleton (2014) used Progol's entailment-reduction algorithm (Muggleton 1995) to identify irreducible sets of metarules. Their approach removed entailment redundant clauses from sets of metarules. They identified theories that are (1) entailment complete for certain fragments of second-order Horn logic, and (2) irreducible. They demonstrated that in some cases as few as two clauses are sufficient to entail an infinite theory. However, they only considered small and highly constrained fragments of metarules. In particular, they focused on an exactly-two-connected fragment of metarules where each literal is dyadic and each first-order variable appears exactly twice in distinct literals. However, as discussed in the introduction, entailment reduction is not always the most appropriate form of reduction because it can remove metarules necessary to specialise a clause. Therefore, in this paper, we go beyond entailment reduction and introduce derivation reduction. We also consider more general fragments of metarules, such as a fragment of metarules sufficient to learn Datalog programs.
Cropper and Tourret (2018) introduced the derivation reduction problem and studied whether sets of metarules could be derivationally reduced. They considered the exactly-two-connected fragment previously considered by Cropper and Muggleton and a two-connected fragment in which every variable appears at least twice, which is analogous to our singleton-free fragment (Sect. 5.3). They used graph theoretic methods to show that certain fragments could not be completely derivationally reduced. They demonstrated on the Michalski trains dataset that the partially derivationally reduced set of metarules outperforms the entailment reduced set. In similar work Cropper and Tourret elaborated on their graph theoretic techniques and expanded the results to unconstrained resolution (Tourret and Cropper 2019).
In this paper, we go beyond the work of (Cropper and Tourret 2018) in several ways. First, we consider more general fragments of metarules, including connected and Datalog fragments. We additionally consider fragments with zero arity literals. In all cases we provide additional theoretical results showing whether certain fragments can be reduced, and, where possible, show the actual reductions. Second, Tourret and Cropper (2019) focused on derivation reduction modulo first-order variable unification, i.e. they considered the case where factorisation (Nienhuys-Cheng and de Wolf 1997) was allowed when resolving two clauses, which is not implemented in practice in current MIL systems. For this reason, although Section 5 in Tourret and Cropper (2019) and Sect. 5.1 in the present paper seemingly consider the same problem, the results are opposite to one another. Third, in addition to entailment and derivation reduction, we also consider subsumption reduction. We provide more theoretical results on the decidability of the reduction problems, such as showing a decidable case for derivation reduction (Theorem 4). Fourth, we describe the reduction algorithms and discuss their computational complexity. Finally, we corroborate the experimental results of Cropper and Tourret on Michalski's train problem (Cropper and Tourret 2018) and provide additional experimental results on two more domains: real-world string transformations and inducing Datalog game rules from observations.
Theory minimisation
We focus on removing clauses from a clausal theory. A related yet distinct topic is theory minimisation where the goal is to find a minimum equivalent formula to a given input formula. This topic is often studied in propositional logic (Hemaspaandra and Schnoor 2011). The minimisation problem allows for the introduction of new clauses. By contrast, the reduction problem studied in this paper does not allow for the introduction of new clauses and instead only allows for the removal of redundant clauses.
Prime implicates
Implicates of a theory T are the clauses that are entailed by T and are called prime when they do not themselves entail other implicates of T. This notion differs from the subsumption and derivation reduction because it focuses on entailment, and it differs from entailment reduction because (1) the notion of a prime implicate has been studied only in propositional, first-order, and some modal logics (Bienvenu 2007; Echenim et al. 2015; Marquis 2000); (2) the generation of prime implicates allows for the introduction of new clauses in the formula.
Logical reduction
We now introduce the reduction problem: the problem of finding redundant clauses in a theory. We first describe the reduction problem starting with preliminaries, and then describe three instances of the problem. The first two instances are based on existing logical reduction methods: subsumption and entailment. The third instance is a new form of reduction introduced in Cropper and Tourret (2018) based on SLD-derivations.
We assume familiarity with logic programming notation (Lloyd 1987) but we restate some key terminology. A clause is a disjunction of literals. A clausal theory is a set of clauses. A Horn clause is a clause with at most one positive literal. A Horn theory is a set of Horn clauses. A definite clause is a Horn clause with exactly one positive literal. A Horn clause is a Datalog clause if (1) it contains no function symbols, and (2) every variable that appears in the head of the clause also appears in a positive (i.e. not negated) literal in the body of the clause.Footnote 7 We denote the powerset of the set S as \(2^S\).
Although the reduction problem applies to any clausal theory, we focus on theories formed of metarules:
Definition 1
(Metarule) A metarule is a second-order Horn clause of the form:
$$\begin{aligned} A_0 \leftarrow A_1, \; \dots \;, \; \; A_m \end{aligned}$$
where each \(A_i\) is a literal of the form \(P(T_1,\dots ,T_n )\) where P is either a predicate symbol or a second-order variable that can be substituted by a predicate symbol, and each \(T_i\) is either a constant symbol or a first-order variable that can be substituted by a constant symbol.
Table 1 Example metarules
Table 1 shows a selection of metarules commonly used in the MIL literature (Cropper and Muggleton 2015, 2016a, 2019; Cropper et al. 2015; Morel et al. 2019). As Definition 1 states, metarules may include predicate and constant symbols. However, we focus on the more general case where metarules only contain variables.Footnote 8 In addition, although metarules can be any Horn clauses, we focus on definite clauses with at least one body literal, i.e. we disallow facts, because their inclusion leads to uninteresting reductions, where in almost all such cases the theories can be reduced to a single fact.Footnote 9 We denote the infinite set of all such metarules as \({{{\mathscr {M}}}}^{{}}_{}\). We focus on fragments of \({{{\mathscr {M}}}}^{{}}_{}\), where a fragment is a syntactically restricted subset of a theory (Bradley and Manna 2007):
(The fragment\({{{\mathscr {M}}}}^{{a}}_{m}\)) We denote as \({{{\mathscr {M}}}}^{{a}}_{m}\) the fragment of \({{{\mathscr {M}}}}^{{}}_{}\) where each literal has arity at most a and each clause has at most m literals in the body. We replace a by the explicit set of arities when we restrict the allowed arities further.
\({{{\mathscr {M}}}}^{{\{2\}}}_{2}\) is a subset of \({{{\mathscr {M}}}}^{{}}_{}\) where each predicate has arity 2 and each clause has at most 2 body literals.
\({{{\mathscr {M}}}}^{{\{2\}}}_{m}\) is a subset of \({{{\mathscr {M}}}}^{{}}_{}\) where each predicate has arity 2 and each clause has at most m body literals.
\({{{\mathscr {M}}}}^{{\{0,2\}}}_{m}\) is a subset of \({{{\mathscr {M}}}}^{{}}_{}\) where each predicate has arity 0 or 2 and each clause has at most m body literals.
\({{{\mathscr {M}}}}^{{a}}_{\{1,2\}}\) is a subset of \({{{\mathscr {M}}}}^{{}}_{}\) where each predicate has arity at most a and each clause has either 1 or 2 body literals.
Let T be a clausal theory. Then we say that T is in the fragment \({{{\mathscr {M}}}}^{{a}}_{m}\) if and only if each clause in T is in \({{{\mathscr {M}}}}^{{a}}_{m}\).
In Sect. 6 we conduct experiments to see whether using reduced sets of metarules can improve learning performance. The primary purpose of the experiments is to test our claim that entailment reduction is not always the most appropriate form of reduction. Our experiments focus on MIL. For self-containment, we briefly describe MIL.
(MIL input) An MIL input is a tuple \((B,E^+,E^-,M)\) where:
B is a set of Horn clauses denoting background knowledge
\(E^+\) and \(E^-\) are disjoint sets of ground atoms representing positive and negative examples respectively
M is a set of metarules
The MIL problem is defined from a MIL input:
(MIL problem) Given a MIL input \((B,E^+,E^-,M)\), the MIL problem is to return a logic program hypothesis H such that:
\(\forall c \in H, \exists m \in M\) such that \(c=m\theta \), where \(\theta \) is a substitution that grounds all the existentially quantified variables in m
\(H \cup B \models E^{+}\)
\(H \cup B \not \models E^{-}\)
We call H a solution to the MIL problem.
The metarules and background define the hypothesis space. To explain our experimental results in Sect. 6, it is important to understand the effect that metarules have on the size of the MIL hypothesis space, and thus on learning performance. The following result generalises previous results (Cropper and Muggleton 2016a; Lin et al. 2014):
Theorem 1
(MIL hypothesis space) Given p predicate symbols and k metarules in \({{{\mathscr {M}}}}^{{a}}_{m}\), the number of programs expressible with n clauses is at most \((p^{m+1}k)^n\).
The number of first-order clauses which can be constructed from a \({{{\mathscr {M}}}}^{{a}}_{m}\) metarule given p predicate symbols is at most \(p^{m+1}\) because for a given metarule there are at most \(m+1\) predicate variables with at most \(p^{m+1}\) possible substitutions. Therefore the set of such clauses S which can be formed from k distinct metarules in \({{{\mathscr {M}}}}^{{a}}_{m}\) using p predicate symbols has cardinality at most \(p^{m+1}k\). It follows that the number of programs which can be formed from a selection of n clauses chosen from S is at most \((p^{m+1}k)^n\). \(\square \)
Theorem 1 shows that the MIL hypothesis space increases given more metarules. The Blumer bound (Blumer et al. 1987),Footnote 10 says that given two hypothesis spaces, searching the smaller space will result in fewer errors compared to the larger space, assuming that the target hypothesis is in both spaces. This result suggests that we should consider removing redundant metarules to improve the learning performance. We explore this idea in the rest of the paper.
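To give a rough, purely illustrative sense of scale (the numbers below are chosen for this example and do not appear in the paper), consider \(p = 10\) predicate symbols, \(k = 5\) metarules in \({{{\mathscr {M}}}}^{{2}}_{2}\) (so \(m = 2\)), and programs of \(n = 3\) clauses. Theorem 1 then bounds the number of expressible programs by
$$\begin{aligned} (p^{m+1}k)^n = (10^{3} \cdot 5)^{3} = 1.25 \times 10^{11}, \end{aligned}$$
whereas removing a single metarule (\(k = 4\)) lowers the bound to \((10^{3} \cdot 4)^{3} = 6.4 \times 10^{10}\).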
To reason about metarules (especially when running the Prolog implementations of the reduction algorithms), we use a method called encapsulation (Cropper and Muggleton 2014) to transform a second-order logic program to a first-order logic program. We first define encapsulation for atoms:
(Atomic encapsulation) Let A be a second-order or first-order atom of the form \(P(T_{1},..,T_{n})\). Then \(enc(A) = enc(P,T_{1},..,T_{n})\) is the encapsulation of A.
For instance, the encapsulation of the atom parent(ann,andy) is enc(parent,ann,andy). Note that encapsulation essentially ignores the quantification of variables in metarules by treating all variables, including predicate variables, as first-order universally quantified variables of the first-order enc predicate. In particular, replacing existential quantifiers with universal quantifiers on predicate variables is fine for our work because we only reason about the form of metarules, not their semantics, i.e. we treat metarules as templates for first-order clauses. We extend atomic encapsulation to logic programs:
(Program encapsulation) The logic program enc(P) is the encapsulation of the logic program P in the case enc(P) is formed by replacing all atoms A in P by enc(A).
For example, the encapsulation of the metarule \(P(A,B) \leftarrow Q(A,C), R(C,B)\) is \(enc(P,A,B) \leftarrow enc(Q,A,C), enc(R,C,B)\). We extend encapsulation to interpretations (Nienhuys-Cheng and de Wolf 1997) of logic programs:
(Interpretation encapsulation) Let I be an interpretation over the predicate and constant symbols in a logic program. Then the encapsulated interpretation enc(I) is formed by replacing each atom A in I by enc(A).
We now have the proposition:
Proposition 1
[Encapsulation models (Cropper and Muggleton 2014)] The second-order logic program P has a model M if and only if enc(P) has the model enc(M).
Follows trivially from the definitions of encapsulated programs and interpretations.
\(\square \)
We can extend the definition of entailment to logic programs:
Proposition 2
[Entailment (Cropper and Muggleton 2014)] Let P and Q be second-order logic programs. Then \(P\models Q\) if and only if every model enc(M) of enc(P) is also a model of enc(Q).
Follows immediately from Proposition 1. \(\square \)
These results allow us to reason about metarules using standard first-order logic. In the rest of the paper all the reasoning about second-order theories is performed at the first-order level. However, to aid the readability we continue to write non-encapsulated metarules in the rest of the paper, i.e. we will continue to refer to sets of metarules as second-order theories.
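To make the encapsulated representation concrete, the following minimal Prolog sketch shows how a metarule and some background facts might be encapsulated; the metarule/3 wrapper and the particular predicate names are illustrative assumptions made for this sketch, not Metagol's internal format.

% Encapsulated chain metarule: P(A,B) <- Q(A,C), R(C,B).
% All variables, including the predicate variables P, Q and R, become
% ordinary universally quantified variables of the first-order enc predicate.
metarule(chain, enc(P,A,B), [enc(Q,A,C), enc(R,C,B)]).

% Encapsulated background facts, e.g. enc(parent,ann,andy) encapsulates parent(ann,andy).
enc(parent, ann, andy).
enc(parent, andy, amy).

% An encapsulated first-order clause obtained from the chain metarule by
% substituting P/grandparent, Q/parent and R/parent.
enc(grandparent, A, B) :- enc(parent, A, C), enc(parent, C, B).

% ?- enc(grandparent, ann, amy).   % succeeds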
Logical reduction problem
We now describe the logical reduction problem. For the clarity of the paper, and to avoid repeating definitions for each form of reduction that we consider (entailment, subsumption, and derivability), we describe a general reduction problem which is parametrised by a binary relation \(\sqsubset \) defined over any clausal theory, although in the case of derivability, \(\sqsubset \) is in fact only defined over Horn clauses. Our only constraint on the relation \(\sqsubset \) is that if \(A\sqsubset {}B\), \(A\subseteq A'\) and \(B'\subseteq B\) then \(A'\sqsubset {}B'\). We first define a redundant clause:
Definition 8
(\(\sqsubset \)-redundant clause) The clause C is \(\sqsubset \)-redundant in the clausal theory \(T \cup \{C\}\) whenever \(T \sqsubset \{C\}\).
In a slight abuse of notation, we allow Definition 8 to also refer to a single clause, i.e. in our notation \(T \sqsubset C\) is the same as \(T \sqsubset \{C\}\). We define a reduced theory:
Definition 9
(\(\sqsubset \)-reduced theory) A clausal theory is \(\sqsubset \)-reduced if and only if it is finite and it does not contain any \(\sqsubset \)-redundant clauses.
We define the input to the reduction problem:
Definition 10
(\(\sqsubset \)-reduction input) A reduction input is a pair (T, \(\sqsubset \)) where T is a clausal theory and \(\sqsubset \) is a binary relation over a clausal theory.
Note that a reduction input may (and often will) be an infinite clausal theory. We define the reduction problem:
Definition 11
(\(\sqsubset \)-reduction problem) Let (T, \(\sqsubset \)) be a reduction input. Then the \(\sqsubset \)-reduction problem is to find a finite theory \(T' \subseteq T\) such that (1) \(T' \sqsubset T\) (i.e. \(T' \sqsubset C\) for every clause C in T), and (2) \(T'\) is \(\sqsubset \)-reduced. We call \(T'\) a \(\sqsubset \)-reduction.
Although the input to a \(\sqsubset \)-reduction problem may contain an infinite theory, the output (a \(\sqsubset \)-reduction) must be a finite theory. We also introduce a variant of the \(\sqsubset \)-reduction problem where the reduction must obey certain syntactic restrictions:
Definition 12
(\({{{\mathscr {M}}}}^{{a}}_{m}\)-\(\sqsubset \)-reduction problem) Let (T,\(\sqsubset \),\({{{\mathscr {M}}}}^{{a}}_{m}\)) be a triple, where the first two elements are as in a standard reduction input and \({{{\mathscr {M}}}}^{{a}}_{m}\) is a target reduction theory. Then the \({{{\mathscr {M}}}}^{{a}}_{m}\)-\(\sqsubset \)-reduction problem is to find a finite theory \(T' \subseteq T\) such that (1) \(T'\) is a \(\sqsubset \)-reduction of T, and (2) \(T'\) is in \({{{\mathscr {M}}}}^{{a}}_{m}\).
Subsumption reduction
The first form of reduction we consider is based on subsumption, which, as discussed in Sect. 2, is often used to eliminate redundancy in a clausal theory:
(Subsumption) A clause C subsumes a clause D, denoted as \(C \preceq D\), if there exists a substitution \(\theta \) such that \(C\theta \subseteq D\).
Note that if a clause C subsumes a clause D then \(C \models D\) (Robinson 1965). However, if \(C \models D\) then it does not necessarily follow that \(C \preceq D\). Subsumption can therefore be seen as being weaker than entailment. Whereas checking entailment between clauses is undecidable (Church 1936), Robinson (1965) showed that checking subsumption between clauses is decidable [although in general deciding subsumption is an NP-complete problem (Nienhuys-Cheng and de Wolf 1997)].
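To make the subsumption test concrete, the following is a minimal Prolog sketch of a theta-subsumption check for definite clauses represented as clause(Head, BodyLiterals), for example encapsulated metarules; the representation and predicate names are assumptions made for this sketch rather than the implementation used for the experiments.

% subsumes_clause(+C, +D): C subsumes D, i.e. there is a substitution theta
% such that C theta is a subset of D (head maps to head, body into body).
subsumes_clause(C, D) :-
    % Ground a copy of D with fresh constants so its variables cannot bind.
    copy_term(D, GroundD),
    numbervars(GroundD, 0, _),
    GroundD = clause(GroundHead, GroundBody),
    % Work on a copy of C so the caller's variables are left untouched.
    copy_term(C, clause(Head, Body)),
    Head = GroundHead,                      % map C's head onto D's head
    subset_literals(Body, GroundBody).      % map C's body into D's body

subset_literals([], _).
subset_literals([L|Ls], GroundBody) :-
    member(L, GroundBody),                  % unification chooses the image of L
    subset_literals(Ls, GroundBody).

% For example, P(A,B) <- Q(A,C) subsumes P(A,B) <- Q(A,C), R(C,B):
% ?- subsumes_clause(clause(enc(P,A,B), [enc(Q,A,C)]),
%                    clause(enc(P1,A1,B1), [enc(Q1,A1,C1), enc(R1,C1,B1)])).
% succeeds.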
If T is a clausal theory then the pair \((T,\preceq )\) is an input to the \(\sqsubset \)-reduction problem, which leads to the subsumption reduction problem (S-reduction problem). We show that the S-reduction problem is decidable for finite theories:
Proposition 3
(Finite S-reduction problem decidability) Let T be a finite theory. Then the corresponding S-reduction problem is decidable.
We can enumerate each element \(T'\) of \(2^T\) in ascending order on the cardinality of \(T'\). For each \(T'\) we can check whether \(T'\) subsumes T, which is decidable because subsumption between clauses is decidable. If \(T'\) subsumes T then we correctly return \(T'\); otherwise we continue to enumerate. Because the set \(2^T\) is finite the enumeration must halt. Because the set \(2^T\) contains T the algorithm will in the worst-case return T. Thus the problem is decidable. \(\square \)
Entailment reduction
As mentioned in the introduction, Cropper and Muggleton (2014) previously used entailment reduction (Muggleton 1995) to reduce sets of metarules using the notion of an entailment redundant clause:
(E-redundant clause) The clause C is entailment redundant (E-redundant) in the clausal theory \(T \cup \{C\}\) whenever \(T\models C\).
If T is a clausal theory then the pair \((T,\models )\) is an input to the \(\sqsubset \)-reduction problem, which leads to the entailment reduction problem (E-reduction problem). We show the relationship between an E- and a S-reduction:
Let T be a clausal theory, \(T_S\) be a S-reduction of T, and \(T_E\) be an E-reduction of T. Then \(T_E \models T_S\).
Assume the opposite, i.e. \(T_E \not \models T_S\). This assumption implies that there is a clause \(C \in T_S\) such that \(T_E \not \models C\). By the definition of S-reduction, \(T_S\) is a subset of T so C must be in T, which implies that \(T_E \not \models T\). But this contradicts the premise that \(T_E\) is an E-reduction of T. Therefore the assumption cannot hold, and thus \(T_E \models T_S\). \(\square \)
We show that the E-reduction problem is undecidable for arbitrary clausal theories:
(E-reduction problem clausal decidability) The E-reduction problem for clausal theories is undecidable.
Follows from the undecidability of entailment in clausal logic (Church 1936). \(\square \)
The E-reduction problem for Horn theories is also undecidable:
(E-reduction problem Horn decidability) The E-reduction problem for Horn theories is undecidable.
Follows from the undecidability of entailment in Horn logic (Marcinkowski and Pacholski 1992). \(\square \)
The E-reduction problem is, however, decidable for finite Datalog theories:
Proposition 7
(E-reduction problem Datalog decidability) The E-reduction problem for finite Datalog theories is decidable.
Follows from the decidability of entailment in Datalog (Dantsin et al. 2001) using a similar algorithm to the one used in the proof of Proposition 3. \(\square \)
Derivation reduction
As mentioned in the introduction, entailment reduction can be too strong a form of reduction. We therefore describe a new form of reduction based on derivability (Cropper and Tourret 2018; Tourret and Cropper 2019). Although our notion of derivation reduction can be defined for any proof system [such as unconstrained resolution as is done in Tourret and Cropper (2019)] we focus on SLD-resolution because we want to reduce sets of metarules, which are definite clauses. We define the function \(R^n(T)\) of a Horn theory T as:
$$\begin{aligned} \begin{aligned}&R^0(T) = T \\&R^n(T) = \{C | C_1 \in R^{n-1}(T),C_2 \in T,C \, \hbox {is the binary resolvent of} \, C_1 \, \hbox {and} \, C_2\} \end{aligned} \end{aligned}$$
We use this function to define the Horn closure of a Horn theory:
(Horn closure) The Horn closure \(R^*(T)\) of a Horn theory T is:
$$\begin{aligned} \bigcup \limits _{n\in {\mathbb {N}}}R^n(T) \end{aligned}$$
We state our notion of derivability:
(Derivability) A Horn clause C is derivable from the Horn theory T, written \(T \vdash C\), if and only if \(C \in R^*(T)\).
We define a derivationally redundant (D-redundant) clause:
(D-redundant clause) A clause C is derivationally redundant in the Horn theory \(T \cup \{C\}\) if \(T \vdash C\).
Let T be a Horn theory, then the pair \((T,\vdash )\) is an input to the \(\sqsubset \)-reduction problem, which leads to the derivation reduction problem (D-reduction problem). Note that a theory can have multiple D-reductions. For instance, consider the theory T:
$$\begin{aligned} \begin{aligned}&C_1 = P(A,B) \leftarrow Q(B,A) \\&C_2 = P(A,B) \leftarrow Q(A,C),R(C,B) \\&C_3 = P(A,B) \leftarrow Q(C,A),R(C,B) \end{aligned} \end{aligned}$$
One D-reduction of T is \(\{C_1,C_2\}\) because we can resolve the first body literal of \(C_2\) with \(C_1\) to derive \(C_3\) (up to variable renaming). Another D-reduction of T is \(\{C_1,C_3\}\) because we can likewise resolve the first body literal of \(C_3\) with \(C_1\) to derive \(C_2\).
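For concreteness, the following is a minimal Prolog sketch of a single binary (SLD) resolution step between two definite clauses in the clause(Head, BodyLiterals) representation used in the sketches above; it is illustrative only.

% resolvent(+Main, +Side, -Resolvent): resolve one body literal of Main
% against the head of a renamed copy of Side (binary resolution for
% definite clauses, as used in the definition of R^n).
resolvent(clause(MainHead, MainBody), Side, clause(MainHead, NewBody)) :-
    copy_term(Side, clause(SideHead, SideBody)),   % rename Side apart
    select(Lit, MainBody, RestBody),               % choose a body literal of Main
    Lit = SideHead,                                % unify it with Side's head
    append(SideBody, RestBody, NewBody).           % replace it by Side's body

% With the encapsulated forms of C1 and C2 from the example above,
% ?- resolvent(clause(enc(P,A,B), [enc(Q,A,C), enc(R,C,B)]),
%              clause(enc(P1,A1,B1), [enc(Q1,B1,A1)]),
%              Res).
% binds Res to clause(enc(P,A,B), [enc(Q2,C,A), enc(R,C,B)]) for a fresh
% variable Q2, i.e. the metarule C3 up to variable renaming.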
We can show the relationship between E- and D-reductions by restating the notion of a SLD-deduction (Nienhuys-Cheng and de Wolf 1997):
[SLD-deduction (Nienhuys-Cheng and de Wolf 1997)] Let T be a Horn theory and C be a Horn clause. Then there exists a SLD-deduction of C from T, written \(T \vdash _d C\), if C is a tautology or if there exists a clause D such that \(T \vdash D\) and D subsumes C.
We can use the subsumption theorem (Nienhuys-Cheng and de Wolf 1997) to show the relationship between SLD-deductions and logical entailment:
[SLD-subsumption theorem (Nienhuys-Cheng and de Wolf 1997)] Let T be a Horn theory and C be a Horn clause. Then \(T \models C\) if and only if \(T \vdash _d C\).
We can use this result to show the relationship between an E- and a D-reduction:
Let T be a Horn theory, \(T_E\) be an E-reduction of T, and \(T_D\) be a D-reduction of T. Then \(T_E \models T_D\).
Follows from the definitions of E-reduction and D-reduction because an E-reduction can be obtained from a D-reduction with an additional subsumption check. \(\square \)
We also use the SLD-subsumption theorem to show that the D-reduction problem is undecidable for Horn theories:
(D-reduction problem Horn decidability) The D-reduction problem for Horn theories is undecidable.
Assume the opposite, that the problem is decidable, which implies that \(T \vdash C\) is decidable. Since \(T \vdash C\) is decidable and subsumption between Horn clauses is decidable (Garey and Johnson 1979), then finding a SLD-deduction is also decidable. Therefore, by the SLD-subsumption theorem, entailment between Horn clauses is decidable. However, entailment between Horn clauses is undecidable (Schmidt-Schauß 1988), so the assumption cannot hold. Therefore, the problem must be undecidable. \(\square \)
However, the D-reduction problem is decidable for any fragment \({{{\mathscr {M}}}}^{{a}}_{m}\) (e.g. definite Datalog clauses where each clause has at least one body literal, with additional arity and body size constraints). To show this result, we first introduce two lemmas:
Lemma 1
Let D, \(C_1\), and \(C_2\) be definite clauses with \(m_d\), \(m_{c1}\), and \(m_{c2}\) body literals respectively, where \(m_d\), \(m_{c1}\), and \(m_{c2} > 0\). If \(\{C_1,C_2\} \vdash D\) then \(m_{c1} \le m_{d}\) and \(m_{c2} \le m_{d}\).
Follows from the definitions of SLD-resolution (Nienhuys-Cheng and de Wolf 1997). \(\square \)
Note that Lemma 1 does not hold for unconstrained resolution because it allows for factorisation (Nienhuys-Cheng and de Wolf 1997). Lemma 1 also does not hold when facts (bodyless definite clauses) are allowed because they would allow for resolvents that are smaller in body size than one of the original two clauses.
Lemma 2
Let \({{{\mathscr {M}}}}^{{a}}_{m}\) be a fragment of metarules. Then \({{{\mathscr {M}}}}^{{a}}_{m}\) is finite up to variable renaming.
Any literal in \({{{\mathscr {M}}}}^{{a}}_{m}\) has at most a first-order variables and 1 second-order variable, so any literal has at most \(a+1\) variables. Any metarule has at most m body literals plus the head literal, so any metarule has at most \(m+1\) literals. Therefore, any metarule has at most \(((a+1)(m+1))\) variables. We can arrange the variables in at most \(((a+1)(m+1))!\) ways, so there are at most \(((a+1)(m+1))!\) metarules in \({{{\mathscr {M}}}}^{{a}}_{m}\) up to variable renaming. Thus \({{{\mathscr {M}}}}^{{a}}_{m}\) is finite up to variable renaming. \(\square \)
Note that the bound in the proof of Lemma 2 is a worst-case result. In practice there are fewer usable metarules because we consider fragments of constrained theories, thus not all clauses are admissible, and in all cases the order of the body literals is irrelevant. We use these two lemmas to show that the D-reduction problem is decidable for \({{{\mathscr {M}}}}^{{a}}_{m}\):
Theorem 4
(\({{{\mathscr {M}}}}^{{a}}_{m}\)-D-reduction problem decidability) The D-reduction problem for theories included in \({{{\mathscr {M}}}}^{{a}}_{m}\) is decidable.
Let T be a finite clausal theory in \({{{\mathscr {M}}}}^{{a}}_{m}\) and C be a definite clause with \(n>0\) body literals. The problem is to decide whether \(T \vdash C\). By Lemma 1, we cannot derive C from any clause which has more than n body literals. We can therefore restrict the resolution closure \(R^*(T)\) to only include clauses with body lengths less than or equal to n. In addition, by Lemma 2 there are only a finite number of such clauses, so we can compute the fixed-point of \(R^*(T)\) restricted to clauses of body size smaller than or equal to n in a finite number of steps and check whether C is in the set. If it is then \(T \vdash C\); otherwise \(T \not \vdash C\). \(\square \)
k-Derivable clauses
Propositions 3 and 7 and Theorem 4 show that the \(\sqsubset \)-reduction problem is decidable under certain conditions. However, as we will show in Sect. 4, even in decidable cases, solving the \(\sqsubset \)-reduction problem is computationally expensive. We therefore solve restricted k-bounded versions of the E- and D-reduction problems, which both rely on SLD-derivations. Specifically, we focus on resolution depth-limited derivations using the notion of k-derivability:
(k-derivability) Let k be a natural number. Then a Horn clause C is k-derivable from the Horn theory T, written \(T \vdash _k C\), if and only if \(C \in R^k(T)\).
The definitions for k-bounded E- and D-reductions follow from this definition but are omitted for brevity. In Sect. 4 we introduce a general algorithm (Algorithm 1) to solve the S-reduction problem and k-bounded E- and D-reduction problems.
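A depth-bounded derivability check along these lines can be sketched in Prolog using the resolvent/3 sketch above; the code below tests whether C is reachable from T within K resolution steps (so it checks membership of the union of the sets \(R^i(T)\) for i up to K) and is an illustrative sketch rather than the implementation used for the reductions in Sect. 5.

% derivable_k(+K, +T, +C): C is derivable from the Horn theory T within K
% binary resolution steps, where one premise of each step is a clause
% derived so far and the other comes from the original theory T.
% Clauses use the clause(Head, BodyLiterals) representation and =@= tests
% equality up to variable renaming.
derivable_k(K, T, C) :-
    derivable_from(K, T, T, C).

derivable_from(_, Derived, _, C) :-
    member(D, Derived),
    D =@= C, !.
derivable_from(K, Derived, T, C) :-
    K > 0,
    K1 is K - 1,
    findall(R, (member(C1, Derived), member(C2, T), resolvent(C1, C2, R)), New),
    New \== [],
    append(Derived, New, Derived1),
    derivable_from(K1, Derived1, T, C).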
Reduction algorithms
In Sect. 5 we logically reduce sets of metarules. We now describe the reduction algorithms that we use.
\(\sqsubset \)-Reduction algorithm
The reduce algorithm (Algorithm 1) shows a general \(\sqsubset \)-reduction algorithm that solves the \(\sqsubset \)-reduction problem (Definition 11) when the input theory is finite.Footnote 11 We ignore cases where the input is infinite because of the inherent undecidability of the problem. Algorithm 1 is largely based on Plotkin's clausal reduction algorithm (Plotkin 1971). Given a finite clausal theory T and a binary relation \(\sqsubset \), the algorithm repeatedly tries to remove a \(\sqsubset \)-redundant clause in T. If it cannot find a \(\sqsubset \)-redundant clause, then it returns the \(\sqsubset \)-reduced theory. Note that since derivation reduction is only defined over Horn theories, in a \(\vdash \)-reduction input \((T,\vdash )\), the theory T has to be Horn. We show total correctness of the algorithm:
Proposition 9
(Algorithm 1 total correctness) Let (T,\(\sqsubset \)) be a \(\sqsubset \)-reduction input where T is finite. Let the corresponding \(\sqsubset \)-reduction problem be decidable. Then Algorithm 1 solves the \(\sqsubset \)-reduction problem.
Trivial by induction on the size of T. \(\square \)
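For concreteness, the core loop of Algorithm 1 can be sketched in Prolog as follows, where redundant/2 is a placeholder for whichever redundancy check is plugged in (subsumption, k-bounded entailment, or k-bounded derivability, for instance built from the subsumes_clause/2 or derivable_k/3 sketches above); this is a sketch of the idea, not the exact implementation.

% reduce(+Theory, -Reduced): repeatedly remove a clause that the remaining
% clauses make redundant; stop when no such clause exists.
reduce(Theory, Reduced) :-
    select(Clause, Theory, Rest),     % pick a candidate clause
    redundant(Clause, Rest),          % Rest makes Clause redundant
    !,
    reduce(Rest, Reduced).
reduce(Theory, Theory).               % no redundant clause: Theory is reduced

% Example instantiation for D-reduction with depth bound 7 (illustrative):
% redundant(Clause, Rest) :- derivable_k(7, Rest, Clause).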
Note that Proposition 9 assumes that the given reduction problem is decidable and that the input theory is finite. If you call Algorithm 1 with an arbitrary clausal theory and the \(\models \) relation then it will not necessarily terminate. We can call Algorithm 1 with specific binary relations, where each variation has a different time-complexity. Table 2 shows different ways of calling Algorithm 1 with their corresponding time complexities, where we assume finite theories as input. We show the complexity of calling Algorithm 1 with the subsumption relation:
Proposition 10
(S-reduction complexity) If T is a finite clausal theory then calling Algorithm 1 with (T,\(\preceq \)) requires at most \(O(|T|^3)\) calls to a subsumption algorithm.
For every clause C in T the algorithm checks whether any other clause in T subsumes C, which requires at most \(O(|T|^2)\) calls to a subsumption algorithm. If any clause C is found to be S-redundant then the algorithm repeats the procedure on the theory (\(T \setminus \{C\}\)), so overall the algorithm requires at most \(O(|T|^3)\) calls to a subsumption algorithm. \(\square \)
Note that a more detailed analysis of calling Algorithm 1 with the subsumption relation would depend on the subsumption algorithm used, which is an NP-complete problem (Garey and Johnson 1979). We show the complexity of calling Algorithm 1 with the k-bounded entailment relation:
Proposition 11
(k-bounded E-reduction complexity) If T is a finite Horn theory and k is a natural number then calling Algorithm 1 with (T,\(\models _k\)) requires at most \(O(|T|^{k+2})\) resolutions.
In the worst case the derivation check (line 4) requires searching the whole SLD-tree which has a maximum branching factor |T| and a maximum depth k and takes \(O(|T|^{k})\) steps. The algorithm potentially does this step for every clause in T so the complexity of this step is \(O(|T|^{k+1})\). The algorithm has to perform this check for every clause in T with an overall worst-case complexity \(O(|T|^{k+2})\). \(\square \)
The complexity of calling Algorithm 1 with the k-derivation relation is identical:
Proposition 12
(k-bounded D-reduction complexity) If T is a finite Horn theory and k is a natural number then calling Algorithm 1 with (T,\(\vdash _k\)) requires at most \(O(|T|^{k+2})\) resolutions.
Follows using the same reasoning as Proposition 11. \(\square \)
Table 2 Outputs and complexity of Algorithm 1 for different input relations and an arbitrary finite clausal theory T
\({{{\mathscr {M}}}}^{{a}}_{m}\)-\(\sqsubset \)-reduction algorithm
Although Algorithm 1 solves the \(\sqsubset \)-reduction problem, it does not solve the \({{{\mathscr {M}}}}^{{a}}_{m}\)-reduction problem (Definition 12). For instance, suppose you have the following theory T in \({{{\mathscr {M}}}}^{{2}}_{4}\):
$$\begin{aligned} \begin{aligned}&M_1 = P(A,B) \leftarrow Q(B,A) \\&M_2 = P(A,B) \leftarrow Q(A,A),R(B,B) \\&M_3 = P(A,B) \leftarrow Q(A,C),R(B,C) \\&M_4 = P(A,B) \leftarrow Q(B,C),R(A,D),S(A,D),T(B,C) \end{aligned} \end{aligned}$$
Suppose you want to know whether T can be E-reduced to \({{{\mathscr {M}}}}^{{2}}_{2}\). Then calling Algorithm 1 with (\(T,\models \)) (i.e. the entailment relation) will return \(T' = \{M_1,M_4\}\) because: \(M_4 \models M_2\),Footnote 12 \(M_4 \models M_3\),Footnote 13 and \(\{M_1,M_4\}\) cannot be further E-reduced.
Although \(T'\) is an E-reduction of T, it is not in \({{{\mathscr {M}}}}^{{2}}_{2}\) because \(M_4\) is not in \({{{\mathscr {M}}}}^{{2}}_{2}\). However, the theory T can be \({{{\mathscr {M}}}}^{{2}}_{2}\)-E-reduced to \(\{M_1,M_2,M_3\}\) because \(\{M_2,M_3\} \models M_4\),Footnote 14 and \(\{M_1,M_2,M_3\}\) cannot be further reduced. In general, if T is a theory in \({{{\mathscr {M}}}}^{{a}}_{m}\) and \(T'\) is an E-reduction of T, then \(T'\) is not necessarily in \({{{\mathscr {M}}}}^{{a}}_{2}\).
Algorithm 2 overcomes this limitation of Algorithm 1. Given a finite clausal theory T, a binary relation \(\sqsubset \), and a reduction fragment \({{{\mathscr {M}}}}^{{a}}_{m}\), Algorithm 2 determines whether there is a \(\sqsubset \)-reduction of T in \({{{\mathscr {M}}}}^{{a}}_{m}\). If there is, it returns the reduced theory; otherwise it returns false. In other words, Algorithm 2 solves the \({{{\mathscr {M}}}}^{{a}}_{m}\)-\(\sqsubset \)-reduction problem. We show total correctness of Algorithm 2:
Proposition 13
(Algorithm 2 correctness) Let (T,\(\sqsubset \),\({{{\mathscr {M}}}}^{{a}}_{m}\)) be a \({{{\mathscr {M}}}}^{{a}}_{m}\)-\(\sqsubset \)-reduction input. If the corresponding \(\sqsubset \)-reduction problem is decidable then Algorithm 2 solves the corresponding \({{{\mathscr {M}}}}^{{a}}_{m}\)-\(\sqsubset \)-reduction problem.
Sketch Proof
We provide a sketch proof for brevity. We need to show that the function aux correctly determines whether B\(\sqsubset \)T, which we can show by induction on the size of T. Assuming aux is correct, then if T can be reduced to B, the mreduce function calls Algorithm 1 to reduce B, which is correct by Proposition 9. Otherwise it returns false. \(\square \)
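Under the same placeholder conventions as the reduce/2 sketch above, one way such a fragment-restricted reduction might be sketched is shown below: keep only the input clauses that already lie in the target fragment, check that they make every input clause redundant, and only then reduce that subset with Algorithm 1. Here in_fragment/1 is an assumed test for membership of the target fragment; the sketch illustrates the idea and is not the authors' exact algorithm.

% mreduce(+Theory, -Result): Result is a reduction of Theory lying in the
% target fragment, or the atom false if no such reduction exists.
mreduce(Theory, Result) :-
    include(in_fragment, Theory, Base),    % clauses of Theory already in the fragment
    (   covers(Base, Theory)               % Base makes every clause of Theory redundant
    ->  reduce(Base, Result)               % reduce Base itself with Algorithm 1
    ;   Result = false
    ).

% covers(+Base, +Clauses): every clause is redundant given Base. For the
% relations considered here (subsumption, entailment, derivability) a clause
% of Base is trivially redundant given Base, so no separate membership test
% is needed.
covers(Base, Clauses) :-
    forall(member(C, Clauses), redundant(C, Base)).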
Reduction of metarules
We now logically reduce fragments of metarules. Given a fragment \({{{\mathscr {M}}}}^{{a}}_{m}\) and a reduction operator \(\sqsubset \), we have three main goals:
G1: identify a \({{{\mathscr {M}}}}^{{a}}_{k}\)-\(\sqsubset \)-reduction of \({{{\mathscr {M}}}}^{{a}}_{m}\) for some k as small as possible
G2: determine whether \({{{\mathscr {M}}}}^{{a}}_{2} \sqsubset {} {{{\mathscr {M}}}}^{{a}}_{\infty }\)
G3: determine whether \({{{\mathscr {M}}}}^{{a}}_{\infty }\) has any (finite) \(\sqsubset \)-reduction
We work on these goals for fragments of \({{{\mathscr {M}}}}^{{a}}_{m}\) relevant to ILP. Table 3 shows the four fragments and their main restrictions. The subsequent sections precisely describe the fragments.
Our first goal (G1) is to essentially minimise the number of body literals in a set of metarules, which can be seen as trying to enforce an Occamist bias. We are particularly interested in reducing sets of metarules to fragments with at most two body literals because \({{{\mathscr {M}}}}^{{\{2\}}}_{2}\) augmented with one function symbol has universal Turing machine expressivity (Tärnlund 1977). In addition, previous work on MIL has almost exclusively used metarules from the fragment \({{{\mathscr {M}}}}^{{2}}_{2}\). Our second goal (G2) is more general and concerns reducing an infinite set of metarules to \({{{\mathscr {M}}}}^{{a}}_{2}\). Our third goal (G3) is similar, but is about determining whether an infinite set of metarules has any finite reduction.
We work on the goals by first applying the reduction algorithms described in the previous section to finite fragments restricted to 5 body literals (i.e. \({{{\mathscr {M}}}}^{{a}}_{5}\)). This value gives us a set of metarules that is large enough to be worth reducing but not so large that the reduction problem becomes intractable. When running the E- and D-reduction algorithms (both k-bounded), we use a resolution-depth bound of 7, which is the largest value for which the algorithms terminate in reasonable time.Footnote 15 After applying the reduction algorithms to the finite fragments, we then try to solve G2 by extrapolating the results to the infinite case (i.e. \({{{\mathscr {M}}}}^{{a}}_{\infty }\)). In cases where \({{{\mathscr {M}}}}^{{a}}_{2} \not \sqsubset {} {{{\mathscr {M}}}}^{{a}}_{\infty }\), we then try to solve G3 by seeing whether there exists any natural number k such that \({{{\mathscr {M}}}}^{{a}}_{k} \sqsubset {} {{{\mathscr {M}}}}^{{a}}_{\infty }\).
Table 3 The four main fragments of \({{{\mathscr {M}}}}^{{}}_{}\) that we consider
Connected (\({{{\mathscr {C}}}}^{a}_{m}\)) results
We first consider a general fragment of metarules. The only constraint is that we follow the standard ILP convention (Cropper and Muggleton 2014; Evans and Grefenstette 2018; Gottlob et al. 1997; Nienhuys-Cheng and de Wolf 1997) and focus on connected clausesFootnote 16:
(Connected clause) A clause is connected if the literals in the clause cannot be partitioned into two sets such that the variables appearing in the literals of one set are disjoint from the variables appearing in the literals of the other set.
The following clauses are all connected:
$$\begin{aligned} \begin{aligned}&P(A) \leftarrow Q(A) \\&P(A,B) \leftarrow Q(A,C) \\&P(A,B) \leftarrow Q(A,B),R(B,D),S(D,B) \end{aligned} \end{aligned}$$
By contrast, these clauses are not connected:
$$\begin{aligned} \begin{aligned}&P(A) \leftarrow Q(B) \\&P(A,B) \leftarrow Q(A),R(C) \\&P(A,B) \leftarrow Q(A,B),S(C) \end{aligned} \end{aligned}$$
We denote the connected fragment of \({{{\mathscr {M}}}}^{{a}}_{m}\) as \({{{\mathscr {C}}}}^{a}_{m}\). Table 4 shows the maximum body size and the cardinality of the reductions obtained when applying the reduction algorithms to \({{{\mathscr {C}}}}^{a}_{5}\) for different values of a. To give an idea of the scale of the reductions, the fragment \({{{\mathscr {C}}}}^{\{1,2\}}_{5}\) contains 77398 unique metarules, of which E-reduction removed all but two. Table 5 shows the actual reductions for \({{{\mathscr {C}}}}^{\{1,2\}}_{5}\). Reductions for other connected fragments are in Appendix "A.1".
Table 4 Cardinality and maximal body size of the reductions of \({{{\mathscr {C}}}}^{a}_{5}\)
Table 5 Reductions of the connected fragment \({{{\mathscr {C}}}}^{\{1,2\}}_{5}\)
As Table 4 shows, all the fragments can be S- and E-reduced to \({{{\mathscr {C}}}}^{a}_{1}\). We show that in general \({{{\mathscr {C}}}}^{a}_{\infty }\) has a \({{{\mathscr {C}}}}^{a}_{1}\)-S-reduction:
Theorem 5
(\({{{\mathscr {C}}}}^{a}_{\infty }\) S-reducibility) For all \(a>0\), the fragment \({{{\mathscr {C}}}}^{a}_{\infty }\) has a \({{{\mathscr {C}}}}^{a}_{1}\)-S-reduction.
Let C be any clause in \({{{\mathscr {C}}}}^{a}_{\infty }\), where \(a>0\). By the definition of connected clauses there must be at least one body literal in C that shares a variable with the head literal of C. The clause formed of the head of C with the body literal directly connected to it is by definition in \({{{\mathscr {C}}}}^{a}_{1}\) and clearly subsumes C. Therefore \({{{\mathscr {C}}}}^{a}_{1} \preceq {{{\mathscr {C}}}}^{a}_{\infty }\). \(\square \)
We likewise show that \({{{\mathscr {C}}}}^{a}_{\infty }\) always has a \({{{\mathscr {C}}}}^{a}_{1}\)-E-reduction:
Theorem 6
(\({{{\mathscr {C}}}}^{a}_{\infty }\) E-reducibility) For all \(a>0\), the fragment \({{{\mathscr {C}}}}^{a}_{\infty }\) has a \({{{\mathscr {C}}}}^{a}_{1}\)-E-reduction.
Follows from Theorem 5 and Proposition 4. \(\square \)
As Table 4 shows, the fragment \({{{\mathscr {C}}}}^{2}_{5}\) could not be D-reduced to \({{{\mathscr {C}}}}^{2}_{2}\) when running the derivation reduction algorithm. However, because we run the derivation reduction algorithm with a maximum derivation depth, this result alone is not enough to guarantee that the output cannot be further reduced. Therefore, we show that \({{{\mathscr {C}}}}^{2}_{5}\) cannot be D-reduced to \({{{\mathscr {C}}}}^{2}_{2}\):
Proposition 14
(\({{{\mathscr {C}}}}^{2}_{5}\) D-irreducibility) The fragment \({{{\mathscr {C}}}}^{2}_{5}\) has no \({{{\mathscr {C}}}}^{2}_{2}\)-D-reduction.
We denote by \({{{\mathscr {P}}}}(C)\) the set of all clauses that can be obtained from a given clause C by permuting the arguments in its literals up to variable renaming. For example if \(C=P(A,B)\leftarrow Q(A,C)\) then \({{{\mathscr {P}}}}(C)=\{(C),(P(A,B)\leftarrow Q(C,A)),(P(B,A)\leftarrow Q(A,C)),(P(B,A)\leftarrow Q(C,A))\}\) up to variable renaming.
Let \(C_I\) denote the clause \(P(A,B) \leftarrow Q(A,C),R(A,D),S(B,C),T(B,D),U(C,D)\). We prove that no clause in \({{{\mathscr {P}}}}(C_I)\) can be derived from \({{{\mathscr {C}}}}^{2}_{2}\) by induction on the length of derivations. Formally, we show that there exist no derivations of length n from \({{{\mathscr {C}}}}^{2}_{2}\) to a clause in \({{{\mathscr {P}}}}(C_I)\). We reason by contradiction and w.l.o.g. we consider only the clause \(C_I\).
For the base case \(n=0\), assume that there is a derivation of length 0 from \({{{\mathscr {C}}}}^{2}_{2}\) to \(C_I\). This assumption implies that \(C_I\in {{{\mathscr {C}}}}^{2}_{2}\), but this clearly cannot hold given the body size of \(C_I\).
For the general case, assume that the property holds for all \(k<n\) and by contradiction consider the final inference in a derivation of length n of \(C_I\) from \({{{\mathscr {C}}}}^{2}_{2}\). Let \(C_1\) and \(C_2\) denote the premises of this inference. Then the literals occurring in \(C_I\) must occur up to variable renaming in at least one of \(C_1\) and \(C_2\). We consider the following cases separately.
All the literals of \(C_I\) occur in the same premise: because of Lemma 1, this case is impossible because this premise would contain more literals than \(C_I\) (the ones from \(C_I\) plus the resolved literal).
Only one of the literals of \(C_I\) occurs separately from the others: w.l.o.g., assume that the literal Q(A, C) occurs alone in \(C_2\) (up to variable renaming). Then \(C_2\) must be of the form \(H(A,C)\leftarrow Q(A,C)\) or \(H(C,A)\leftarrow Q(A,C)\) for some H, where the H-headed literal is the resolved literal of the inference that allows the unification of A and C with their counterparts in \(C_1\).Footnote 17 In this case, \(C_1\) belongs to \({{{\mathscr {P}}}}(C_I)\) and a derivation of \(C_1\) from \({{{\mathscr {C}}}}^{2}_{2}\) of length smaller than n exists as a strict subset of the derivation to \(C_I\) of length n. This contradicts the induction hypothesis, thus the assumed derivation of \(C_I\) cannot exist.
Otherwise, the split of the literals of \(C_I\) between \(C_1\) and \(C_2\) is always such that at least three variables must be unified during the inference. For example, consider the case where \(P(A,B) \leftarrow Q(A,C) \subset C_1\) and the set \(\{R(A',D),S(B',C'),T(B',D),U(C',D)\}\) occurs in the body of \(C_2\) (up to variable renaming). Then \(A'\), \(B'\) and \(C'\) must unify respectively with A, B and C for \(C_I\) to be derived (up to variable renaming). However the inference can at most unify two variable pairs since the resolved literal must be dyadic at most and thus this inference is impossible, a contradiction.
Thus \(C_I\) and all of \({{{\mathscr {P}}}}(C_I)\) cannot be derived from \({{{\mathscr {C}}}}^{2}_{2}\). Note that, since \({{{\mathscr {P}}}}(C_I)\) is also neither a subset of \({{{\mathscr {C}}}}^{2}_{3}\) nor of \({{{\mathscr {C}}}}^{2}_{4}\), this proof also shows that \({{{\mathscr {P}}}}(C_I)\) cannot be derived from \({{{\mathscr {C}}}}^{2}_{3}\) and from \({{{\mathscr {C}}}}^{2}_{4}\). \(\square \)
We generalise this result to \({{{\mathscr {C}}}}^{2}_{\infty }\):
Theorem 7
(\({{{\mathscr {C}}}}^{2}_{\infty }\) D-irreducibility) The fragment \({{{\mathscr {C}}}}^{2}_{\infty }\) has no D-reduction.
It is enough to prove that \({{{\mathscr {C}}}}^{2}_{\infty }\) does not have a \({{{\mathscr {C}}}}^{2}_{m}\)-D-reduction for an arbitrary m because any D-reduced theory, being finite, admits a bound on the body size of the clauses it contains. Starting from \(C_I\) as defined in the proof of Proposition 14, apply the following transformation iteratively for k from 1 to m: replace the literals containing Q and R (i.e. at first Q(A, C) and R(A, D)) with the following set of literals \(Q(A,C_k)\), \(R(A,D_k)\), \(V_k(C_k,D_k)\), \(Q_k(C_k,C)\), \(R_k(D_k,D)\) where all variables and predicate variables labeled with k are new. Let the resulting clause be denoted \(C_{I_m}\). This clause is of body size \(3m+5\) and thus does not belong to \({{{\mathscr {C}}}}^{2}_{m}\). Moreover, for the same reason that \(C_I\) cannot be derived from any \({{{\mathscr {C}}}}^{2}_{m'}\) with \(m'<5\) (see the proof of Proposition 14) \(C_{I_m}\) cannot be derived from any \({{{\mathscr {C}}}}^{2}_{m'}\) with \(m'<3m+5\). In particular, \(C_{I_m}\) cannot be derived from \({{{\mathscr {C}}}}^{2}_{m}\). \(\square \)
Another way to generalise Proposition 14 is the following:
(\({{{\mathscr {C}}}}^{a}_{\infty }\) D-irreducibility) For \(a\ge 2\), the fragment \({{{\mathscr {C}}}}^{a}_{\infty }\) has no \({{{\mathscr {C}}}}^{a}_{a^2+a-2}\)-D-reduction.
Let \(C_a\) denote the clause
$$\begin{aligned} \begin{aligned} C_a = P(A_1,\dots ,A_a)\leftarrow&Q_{1,1}(A_1,B_{1,1},\dots ,B_{1,a-1})\dots Q_{1,a}(A_1,B_{a,1},\dots ,B_{a,a-1})\\&\dots \\&Q_{a,1}(A_a,B_{1,1},\dots ,B_{1,a-1})\dots Q_{a,a}(A_a,B_{a,1},\dots ,B_{a,a-1})\\&R_1(B_{1,1},\dots ,B_{a,1}),\dots ,R_{a-1}(B_{1,a-1},\dots ,B_{a,a-1}) \end{aligned} \end{aligned}$$
Note that for \(a = 2\), the clauses \(C_a\) and \(C_I\) from the proof of Proposition 14 coincide. In fact to show that \(C_a\) is irreducible for any a, it is enough to consider the proof of Proposition 14 where \(C_a\) is substituted to \(C_I\) and where the last case is generalised in the following way:
the split of the literals of \(C_a\) between \(C_1\) and \(C_2\) is always such that at least \(a+1\) variables must be unified during the inference, which is impossible since the resolved literal can contain at most \(a\) variables.
The reason this proof holds is that any subset of \(C_a\) contains at least \(a+1\) distinct variables. Since \(C_a\) is of body size \(a^2+a-1\), this counter-example proves that \({{{\mathscr {C}}}}^{a}_{\infty }\) has no \({{{\mathscr {C}}}}^{a}_{a^2+a-2}\)-D-reduction. \(\square \)
Note that this is enough to conclude that \({{{\mathscr {C}}}}^{a}_{\infty }\) cannot be reduced to \({{{\mathscr {C}}}}^{a}_{2}\) but it does not prove that \({{{\mathscr {C}}}}^{a}_{\infty }\) is not D-reducible.
Table 6 summarises our theoretical results from this section. Theorems 5 and 6 show that \({{{\mathscr {C}}}}^{a}_{\infty }\) can always be S- and E-reduced to \({{{\mathscr {C}}}}^{a}_{1}\) respectively. By contrast, Theorem 7 shows that \({{{\mathscr {C}}}}^{2}_{\infty }\) cannot be D-reduced to \({{{\mathscr {C}}}}^{2}_{2}\). In fact, Theorem 7 says that \({{{\mathscr {C}}}}^{2}_{\infty }\) has no D-reduction. Theorem 7 has direct (negative) implications for MIL systems such as Metagol and HEXMIL. We discuss these implications in more detail in Sect. 7.
Table 6 Existence of a S-, E- or D-reduction of \({{{\mathscr {C}}}}^{a}_{\infty }\) to \({{{\mathscr {C}}}}^{a}_{2}\)
Datalog (\({{{\mathscr {D}}}}^{a}_{m}\)) results
We now consider Datalog clauses, which are often used in ILP (Albarghouthi et al. 2017; Cropper and Muggleton 2016a; Evans and Grefenstette 2018; Kaminski et al. 2018; Muggleton et al. 2015; Si et al. 2018). The relevant Datalog restriction is that if a variable appears in the head of a clause then it must also appear in a body literal. If we look at the S-reductions of \({{{\mathscr {C}}}}^{\{1,2\}}_{5}\) in Table 5 then the clause \(P(A,B) \leftarrow Q(B)\) is not a Datalog clause because the variable A appears in the head but not in the body. We denote the Datalog fragment of \({{{\mathscr {C}}}}^{a}_{m}\) as \({{{\mathscr {D}}}}^{a}_{m}\). Table 7 shows the results of applying the reduction algorithms to \({{{\mathscr {D}}}}^{a}_{5}\) for different values of a. Table 8 shows the reductions for the fragment \({{{\mathscr {D}}}}^{\{1,2\}}_{5}\), which are used in Experiment 3 (Sect. 6.3) to induce Datalog game rules from observations. Reductions for other Datalog fragments are in Appendix "A.2".
Table 7 Cardinality and maximal body size of the reductions of \({{{\mathscr {D}}}}^{a}_{5}\)
Table 8 Reductions of the Datalog fragment \({{{\mathscr {D}}}}^{\{1,2\}}_{5}\)
We show that \({{{\mathscr {D}}}}^{2}_{\infty }\) can be S-reduced to \({{{\mathscr {D}}}}^{2}_{2}\):
(\({{{\mathscr {D}}}}^{2}_{\infty }\) S-reducibility) The fragment \({{{\mathscr {D}}}}^{2}_{\infty }\) has a \({{{\mathscr {D}}}}^{2}_{2}\)-S-reduction.
Follows using the same argument as in Theorem 5 but the reduction is to \({{{\mathscr {D}}}}^{2}_{2}\) instead of \({{{\mathscr {D}}}}^{2}_{1}\). This difference is due to the Datalog constraint that states: if a variable appears in the head it must also appear in the body. For clauses with dyadic heads, if the two head argument variables occur in two distinct body literals then the clause cannot be further reduced beyond \({{{\mathscr {D}}}}^{2}_{2}\). \(\square \)
We show how this result cannot be generalised to \({{{\mathscr {D}}}}^{a}_{\infty }\):
Theorem 9
(\({{{\mathscr {D}}}}^{a}_{\infty }\) S-irreducibility) For \(a>0\), the fragment \({{{\mathscr {D}}}}^{a}_{\infty }\) does not have a \({{{\mathscr {D}}}}^{a}_{a-1}\)-S-reduction.
As a counter-example to a \({{{\mathscr {D}}}}^{a}_{a-1}\)-S-reduction, consider \(C_a=P(X_1,\dots ,X_a)\leftarrow Q_1(X_1), \, \dots ,Q_a(X_a)\). The clause \(C_a\) does not belong to \({{{\mathscr {D}}}}^{a}_{a-1}\) and cannot be S-reduced to it because the removal of any subset of its literals leaves argument variables in the head without their counterparts in the body. Hence, any subset of \(C_a\) does not belong to the Datalog fragment. Thus \(C_a\) cannot be subsumed by a clause in \({{{\mathscr {D}}}}^{a}_{a-1}\). \(\square \)
However, we can show that \({{{\mathscr {D}}}}^{a}_{\infty }\) can always be S-reduced to \({{{\mathscr {D}}}}^{a}_{a}\):
Theorem 10
(\({{{\mathscr {D}}}}^{a}_{\infty }\) to \({{{\mathscr {D}}}}^{a}_{a}\) S-reducibility) For \(a>0\), the fragment \({{{\mathscr {D}}}}^{a}_{\infty }\) has a \({{{\mathscr {D}}}}^{a}_{a}\)-S-reduction.
To prove that \({{{\mathscr {D}}}}^{a}_{\infty }\) has a \({{{\mathscr {D}}}}^{a}_{a}\)-S-reduction it is enough to remark that any clause in \({{{\mathscr {D}}}}^{a}_{\infty }\) has a subclause of body size at most a that is also in \({{{\mathscr {D}}}}^{a}_{\infty }\), the worst case being clauses such as \(C_a\) where all argument variables in the head occur in a distinct literal in the body. \(\square \)
We also show that \({{{\mathscr {D}}}}^{a}_{\infty }\) always has a \({{{\mathscr {D}}}}^{a}_{2}\)-E-reduction, starting with the following lemma:
Lemma 3
For \(a>0\) and \(n\in \{1,\dots ,a\}\), the clause
$$\begin{aligned} P_0(A_1,A_2,\dots ,A_n) \leftarrow P_1(A_1), P_2(A_2), \dots , P_n(A_n) \end{aligned}$$
is \({{{\mathscr {D}}}}^{a}_{2}\)-E-reducible.
By induction on n.
For the base case \(n=2\), by definition \({{{\mathscr {D}}}}^{a}_{2}\) contains \(P_0(A_1,A_2) \leftarrow P_1(A_1), P_2(A_2)\).
For the inductive step, assume the claim holds for \(n-1\). We show it holds for n. By definition \({{{\mathscr {D}}}}^{a}_{2}\) contains the clause \(D_1{=}P(A_1,A_2,\dots ,A_{n}) \leftarrow P_0(A_1,A_2,\dots ,A_{n-1}), P_n(A_{n})\). By the inductive hypothesis, \(D_2=P_0(A_1,A_2,\dots ,A_{n-1})\leftarrow P_1(A_1),\dots ,P_{n-1}(A_{n-1})\) is \({{{\mathscr {D}}}}^{a-1}_{2}\)-E-reducible, and thus also \({{{\mathscr {D}}}}^{a}_{2}\)-E-reducible. Together, \(D_1\) and \(D_2\) entail \(D=P_0(A_1,A_2,\dots ,A_n) \leftarrow P_1(A_1), P_2(A_2), \dots ,\)\( P_n(A_n)\), which can be seen by resolving the literal \(P_0(A_1,A_2,\dots ,A_{n-1})\) from \(D_1\) with the same literal from \(D_2\) to derive D. Thus D is \({{{\mathscr {D}}}}^{a}_{2}\)-E-reducible.
Theorem 11
(\({{{\mathscr {D}}}}^{a}_{\infty }\) E-reducibility) For \(a>0\), the fragment \({{{\mathscr {D}}}}^{a}_{\infty }\) has a \({{{\mathscr {D}}}}^{a}_{2}\)-E-reduction.
Let C be any clause in \({{{\mathscr {D}}}}^{a}_{\infty }\). We denote the head of C by \(P(A_1,\dots ,A_n)\), where \(0<n\le a\). The possibility that some of the \(A_i\) are equal does not impact the reasoning.
If \(n=1\), then by definition, there exists a literal \(L_1\) in the body of C such that \(A_1\) occurs in \(L_1\). It is enough to consider the clause \(P(A_1)\leftarrow L_1\) to conclude, because \(P(A_1)\) is the head of C and \(L_1\) belongs to the body of C, thus \(P(A_1)\leftarrow L_1\) entails C, and this clause belongs to \({{{\mathscr {D}}}}^{a}_{2}\).
In the case where \(n>1\), there must exist literals \(L_1,\dots ,L_n\) in the body of C such that \(A_i\) occurs in \(L_i\) for \(i\in \{1,\dots ,n\}\). Consider the clause \(C' = P(A_1,\dots ,A_n) \leftarrow L_1,\dots ,L_n\). There are a few things to stress about \(C'\):
The clause \(C'\) belongs to \({{{\mathscr {D}}}}^{a}_{\infty }\).
Some \(L_i\) may be identical with each other, since the \(A_i\)s may occur together in literals or simply be equal, but this scenario does not impact the reasoning.
The clause \(C'\) entails C because \(C'\) is equivalent to a subset of C (but this subset may be distinct from \(C'\) due to \(C'\) possibly including some extra duplicated literals).
Now consider the clause \(D = P(A_1,\dots ,A_n)\leftarrow P_1(A_1),\dots ,P_n(A_n)\). For \(i\in \{1,\dots ,n\}\), the clause \(P_i(A_i)\leftarrow L_i\) belongs to \({{{\mathscr {D}}}}^{a}_{2}\) by definition, thus \({{{\mathscr {D}}}}^{a}_{2}\cup D\vdash D'\) where \(D'=P(A_1,\dots ,A_n)\leftarrow L_1,\dots ,L_n\). Moreover, by Lemma 3, D is \({{{\mathscr {D}}}}^{a}_{2}\)-E-reducible, hence \(D'\) is also \({{{\mathscr {D}}}}^{a}_{2}\)-E-reducible. Note that this notation hides the fact that if a variable occurs in distinct body literals \(L_i\) in \(C'\), this connection is not captured in \(D'\) where distinct variables will occur instead, thus there is no guarantee that \(D'=C'\). For example, if \(C'=P(A_1,A_2)\leftarrow Q(A_1,B,A_2),R(A_2,B)\) then \(D'=P(A_1,A_2)\leftarrow Q(A_1,B,A_2'),R(A_2,B')\). However, it always holds that \(D'\models C'\), because \(D'\) subsumes \(C'\). In our small example, it is enough to consider the substitution \(\theta =\{B'/B,A_2'/A_2\}\) to observe this. Thus by transitivity of entailment, we can conclude that C is \({{{\mathscr {D}}}}^{a}_{2}\)-E-reducible. \(\square \)
As Table 7 shows, not all of the fragments can be D-reduced to \({{{\mathscr {D}}}}^{a}_{2}\). In particular, the result that \({{{\mathscr {D}}}}^{2}_{\infty }\) has no \({{{\mathscr {D}}}}^{2}_{2}\)-D-reduction follows from Theorem 7 because the counterexamples presented in the proof also belong to \({{{\mathscr {D}}}}^{2}_{\infty }\).
Table 9 summarises our theoretical results from this section. Theorem 9 shows that \({{{\mathscr {D}}}}^{a}_{\infty }\) never has a \({{{\mathscr {D}}}}^{a}_{a-1}\)-S-reduction. This result differs from the connected fragment where \({{{\mathscr {C}}}}^{a}_{\infty }\) could always be S-reduced to \({{{\mathscr {C}}}}^{a}_{2}\). However, Theorem 10 shows that \({{{\mathscr {D}}}}^{a}_{\infty }\) can always be S-reduced to \({{{\mathscr {D}}}}^{a}_{a}\). As with the connected fragment, Theorem 11 shows that \({{{\mathscr {D}}}}^{a}_{\infty }\) can always be E-reduced to \({{{\mathscr {D}}}}^{a}_{2}\). The result that \({{{\mathscr {D}}}}^{2}_{\infty }\) has no D-reduction follows from Theorem 7.
Table 9 Existence of a S-, E- or D-reduction of \({{{\mathscr {D}}}}^{a}_{\infty }\) to \({{{\mathscr {D}}}}^{a}_{2}\)
Singleton-free (\({{{\mathscr {K}}}}^{a}_{m}\)) results
It is common in ILP to require that all the variables in a clause appear at least twice (Cropper and Muggleton 2014; Muggleton and Feng 1990; De Raedt and Bruynooghe 1992), which essentially eliminates singleton variables. We call this fragment the singleton-free fragment:
(Singleton-free) A clause is singleton-free if each first-order variable appears at least twice.
For example, if we look at the E-reductions of the connected fragment \({{{\mathscr {C}}}}^{\{1,2\}}_{5}\) shown in Table 5 then the clause \(P(A) \leftarrow Q(B,A)\) is not singleton-free because the variable B only appears once. We denote the singleton-free fragment of \({{{\mathscr {D}}}}^{a}_{m}\) as \({{{\mathscr {K}}}}^{a}_{m}\). Table 10 shows the results of applying the reduction algorithms to \({{{\mathscr {K}}}}^{a}_{5}\). Table 11 shows the reductions of \({{{\mathscr {K}}}}^{\{2\}}_{5}\). Reductions for other singleton-free fragments are in Appendix "A.3".
Table 10 Cardinality and maximal body size of the reductions of \({{{\mathscr {K}}}}^{a}_{5}\)
Table 11 Reductions of the singleton-free fragment \({{{\mathscr {K}}}}^{\{2\}}_{5}\)
Unlike in the connected and Datalog cases, the fragment \({{{\mathscr {K}}}}^{\{2\}}_{5}\) is no longer S-reducible to \({{{\mathscr {K}}}}^{\{2\}}_{2}\). We show that \({{{\mathscr {K}}}}^{2}_{\infty }\) cannot be reduced to \({{{\mathscr {K}}}}^{2}_{2}\).
Proposition 16
(\({{{\mathscr {K}}}}^{2}_{\infty }\) S-reducibility) The fragment \({{{\mathscr {K}}}}^{2}_{\infty }\) does not have a \({{{\mathscr {K}}}}^{2}_{2}\)-S-reduction.
As a counter-example, consider the clause:
$$\begin{aligned} C=P(A,B) \leftarrow Q(A,D), R(A,D), S(B,C), T(B,C) \end{aligned}$$
Consider removing any non-empty subset of literals from the body of C. Doing so leads to a singleton variable in the remaining clause, so it is not a singleton-free clause. Moreover, for any other clause to subsume C it must be more general than C, but that is not possible again because of the singleton-free constraint.Footnote 18\(\square \)
We can likewise show that this result holds in the general case:
Theorem 12
(\({{{\mathscr {K}}}}^{a}_{\infty }\) S-reducibility) For \(a\ge 2\), the fragment \({{{\mathscr {K}}}}^{a}_{\infty }\) does not have a \({{{\mathscr {K}}}}^{a}_{2a-1}\)-S-reduction.
We generalise the clause C from the proof of Proposition 16 to define the clause \(C_a = P(A_1,\dots ,A_a)\leftarrow P_1(A_1,B_1),P_2(A_1,B_1),\dots ,P_{2a-1}(A_a,B_a),P_{2a}(A_a,B_a)\). The same reasoning applies to \(C_a\) as to \(C (= C_2)\), making \(C_a\) irreducible in \({{{\mathscr {K}}}}^{a}_{\infty }\). Moreover \(C_a\) is of body size 2a, thus \(C_a\) is a counterexample to a \({{{\mathscr {K}}}}^{a}_{2a-1}\)-S-reduction of \({{{\mathscr {K}}}}^{a}_{\infty }\). \(\square \)
However, all the fragments can be E-reduced to \({{{\mathscr {K}}}}^{a}_{2}\).
Theorem 13
(\({{{\mathscr {K}}}}^{a}_{\infty }\) E-reducibility) For \(a>0\), the fragment \({{{\mathscr {K}}}}^{a}_{\infty }\) has a \({{{\mathscr {K}}}}^{a}_{2}\)-E-reduction.
The proof of Theorem 13 is an adaptation of that of Theorem 11. The only difference is that if \(n=1\) then \(P(A_1)\leftarrow L_1,L_1\) must be considered instead of \(P(A_1)\leftarrow L_1\) to ensure the absence of singleton variables in the body of the clause, and for the same reason, in the general case, the clause \(D'=P(A_1,\dots ,A_n)\leftarrow L_1,...,L_n\) must be replaced by \(D'=P(A_1,\dots ,A_n)\leftarrow L_1,L_1,\dots ,L_n,L_n\). Note that \(C'\) is not modified and thus may or may not belong to \({{{\mathscr {K}}}}^{a}_{\infty }\). However, it is enough that \(C'\in {{{\mathscr {D}}}}^{a}_{\infty }\). With these modifications, the proof carries from \({{{\mathscr {K}}}}^{a}_{\infty }\) to \({{{\mathscr {K}}}}^{a}_{2}\) as from \({{{\mathscr {D}}}}^{a}_{\infty }\) to \({{{\mathscr {D}}}}^{a}_{2}\), including the results in Lemma 3. \(\square \)
Table 12 summarises our theoretical results from this section. Theorem 12 shows that for \(a\ge 2\), the fragment \({{{\mathscr {K}}}}^{a}_{\infty }\) does not have a \({{{\mathscr {K}}}}^{a}_{2a-1}\)-S-reduction. This result contrasts with the Datalog fragment where \({{{\mathscr {D}}}}^{a}_{\infty }\) always has a \({{{\mathscr {D}}}}^{a}_{a}\)-S-reduction. As is becoming clear, adding more restrictions to a fragment typically results in less S-reducibility. By contrast, as with the connected and Datalog fragments, Theorem 13 shows that fragment \({{{\mathscr {K}}}}^{a}_{\infty }\) always has a \({{{\mathscr {K}}}}^{a}_{2}\)-E-reduction. In addition, as with the other fragments, \({{{\mathscr {K}}}}^{a}_{\infty }\) has no D-reduction for \(a\ge 2\).
Table 12 Existence of a S-, E- or D-reduction of \({{{\mathscr {K}}}}^{a}_{\infty }\) to \({{{\mathscr {K}}}}^{a}_{2}\)
Duplicate-free (\({{{\mathscr {U}}}}^{a}_{m}\)) results
The previous three fragments are general in the sense that they have been widely used in ILP. By contrast, the final fragment that we consider is of particular interest to MIL. Table 1 shows a selection of metarules commonly used in the MIL literature. These metarules have been successfully used despite no theoretical justification. However, if we consider the reductions of the three fragments so far, the identity, precon, and postcon metarules do not appear in any reduction. These metarules can be derived from the reductions, typically using either the \(P(A) \leftarrow Q(A,A)\) or \(P(A,A) \leftarrow Q(A)\) metarules. To try to identify a reduction which more closely matches the metarules shown in Table 1, we consider a fragment that excludes clauses in which a literal contains multiple occurrences of the same variable. For instance, this fragment excludes the previously mentioned metarules and also excludes the metarule \(P(A,A) \leftarrow Q(B,A)\), which was in the D-reduction shown in Table 5. We call this fragment duplicate-free. It is a sub-fragment of \({{{\mathscr {K}}}}^{a}_{m}\) and we denote it as \({{{\mathscr {U}}}}^{a}_{m}\).
Table 13 shows the reductions for the fragment \({{{\mathscr {U}}}}^{\{1,2\}}_{5}\). Reductions for other duplicate-free fragments are in Appendix "A.4". As Table 13 shows, the D-reduction of \({{{\mathscr {U}}}}^{\{1,2\}}_{5}\) contains some metarules commonly used in the MIL literature. For instance, it contains the \({\textit{identity}}_1\), \({\textit{didentity}}_2\), and precon metarules. We use the metarules shown in Table 13 in Experiments 1 and 2 (Sects. 6.1 and 6.2) to learn Michalski trains solutions and string transformation programs respectively.
Table 14 shows the results of applying the reduction algorithms to \({{{\mathscr {U}}}}^{a}_{5}\) for different values of a. All the theoretical results that hold for the singleton-free fragments hold similarly for the duplicate-free fragments for the following reasons:
(S) The clauses in the proofs of Proposition 16 and Theorem 12 belong to \({{{\mathscr {U}}}}^{a}_{\infty }\).
(E) If the clause C considered initially in the proof of Theorem 13 belongs to \({{{\mathscr {U}}}}^{a}_{\infty }\), then all the subsequent clauses in that proof are also duplicate-free.
(D) In the proof of Theorem 7, the \(C_{I_m}\) family of clauses all belong to \({{{\mathscr {U}}}}^{a}_{\infty }\).
Thus Table 12 is also a summary of the S-, E- and D-reduction results of \({{{\mathscr {U}}}}^{a}_{\infty }\) to \({{{\mathscr {U}}}}^{a}_{2}\).
Table 13 Reductions of the fragment \({{{\mathscr {U}}}}^{\{1,2\}}_{5}\)
Table 14 Cardinality and body size of the reductions of \({{{\mathscr {U}}}}^{a}_{5}\)
We started this section with three goals (G1, G2, and G3). Table 15 summarises the results towards these goals for fragments of metarules relevant to ILP (Table 3). For G1, our results are mostly empirical, i.e. the results are the outputs of the reduction algorithms. For G2, Table 15 shows that the results are all positive for E-reduction, but mostly negative for S- and D-reduction, especially for Datalog fragments. Similarly, for G3 the results are again positive for E-reduction but negative for S- and D-reduction for Datalog fragments. We discuss the implications of these results in Sect. 7.
Table 15 Existence of a S-, E- or D-reduction of \({{{\mathscr {M}}}}^{{a}}_{\infty }\) to \({{{\mathscr {M}}}}^{{a}}_{2}\)
As explained in Sect. 1, deciding which metarules to use for a given learning task is a major open problem. The problem is the trade-off between efficiency and expressivity: the hypothesis space grows given more metarules (Theorem 1), so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. In this section we experimentally explore this trade-off. As described in Sect. 2, Cropper and Muggleton (2014) showed that learning with E-reduced sets of metarules can lead to higher predictive accuracies and lower learning times compared to learning with non-E-reduced sets. However, as argued in Sect. 1, we claim that E-reduction is not always the most suitable form of reduction because it can remove metarules necessary to learn programs with the appropriate specificity. To test this claim, we now conduct experiments that compare the learning performance of Metagol 2.3.0,Footnote 19 the main MIL implementation, when given different reduced sets of metarules.Footnote 20 We test the null hypothesis:
Null hypothesis 1 There is no difference in the learning performance of Metagol when using different reduced sets of metarules
To test this null hypothesis, we consider three domains: Michalski trains, string transformations, and game rules.
Michalski trains
In the Michalski trains problems (Larson and Michalski 1977) the task is to induce a program that distinguishes eastbound trains from westbound trains. Figure 1 shows an example target program, where the target concept (f/1) is that the train has a long carriage with two wheels and another with three wheels.
An example Michalski trains target program. In the Michalski trains domain, a carriage (car) can be long or short. A short carriage always has two wheels. A long carriage has either two or three wheels
To obtain the experimental data, we first generated 8 random target train programs that are progressively more difficult, where difficulty is measured by the number of literals in the generated program, from the easiest task \(\hbox {T}_1\) to the most difficult task \(\hbox {T}_8\). Figure 2 shows the background predicates available to Metagol. We vary the metarules given to Metagol. We use the S-, E-, and D-reductions of the fragment \({{{\mathscr {U}}}}^{\{1,2\}}_{5}\) (Table 13). In addition, we also consider the \({{{\mathscr {U}}}}^{\{1,2\}}_{2}\) fragment of the D-reduction of \({{{\mathscr {U}}}}^{\{1,2\}}_{5}\), i.e. a subset of the D-reduction consisting only of metarules with at most two body literals. This fragment, which we denote as \(D^{*}\), contains three fewer metarules than the D-reduction of \({{{\mathscr {U}}}}^{\{1,2\}}_{5}\). Table 16 shows this fragment.
Background relations available in the trains experiment
Table 16 The \(\hbox {D}^*\) fragment, which is the D-reduction of the fragment \({{{\mathscr {U}}}}^{\{1,2\}}_{5}\) restricted to the fragment \({{{\mathscr {U}}}}^{\{1,2\}}_{2}\)
Our experimental method is as follows. For each train task \(t_i\) in \(\{T_1,\dots ,T_8\}\):
Generate 10 training examples of \(t_i\), half positive and half negative
Generate 200 testing examples of \(t_i\), half positive and half negative
For each set of metarules m in the S-, E-, D-, and \(D^*\)-reductions:
Learn a program for task \(t_i\) using the training examples and metarules m
Measure the predictive accuracy of the learned program using the testing examples
If a program is not found in 10 min then no program is returned and every testing example is deemed to have failed. We measure mean predictive accuracies, mean learning times, and standard errors over 10 repetitions.
Table 17 Predictive accuracies when using different reduced sets of metarules on the Michalski trains problems
Table 18 Learning times in seconds when using different reduced sets of metarules on the Michalski trains problems
Example programs learned by Metagol when varying the metarule set. The target program is shown in Fig. 1
Table 17 shows the predictive accuracies when learning with the different sets of metarules. The D set generally outperforms the S and E sets with a higher mean accuracy of 88% versus 80% and 73% respectively. Moreover, the \(D^*\) set easily outperforms them all with a mean accuracy of 100%. A McNemar's test (footnote 21) on the D and \(D^*\) accuracies confirmed the significance at the \(p < 0.01\) level.
Table 18 shows the corresponding learning times when using different reduced sets of metarules. The D set outperforms (has lower mean learning time) the S and E sets, and again the \(D^*\) set outperforms them all. A paired t-test (footnote 22) on the D and \(D^*\) learning times confirmed the significance at the \(p < 0.01\) level.
The \(D^*\) set performs particularly well on the more difficult tasks. The poor performance of the S and E sets on the more difficult tasks is for one of two reasons. The first reason is that the S- and E-reduction algorithms have removed the metarules necessary to express the target concept. This observation strongly corroborates our claim that E-reduction can be too strong because it can remove metarules necessary to specialise a clause. The second reason is that the S- and E-reduction algorithms produce sets of metarules that are still sufficient to express the target theory but doing so requires a much larger and more complex program, measured by the number of clauses needed.
The performance discrepancy between the D and \(D^*\) sets of metarules can be explained by comparing the hypothesis spaces searched. For instance, when searching for a program with 3 clauses, Theorem 1 shows that when using the D set of metarules the hypothesis space contains approximately \(10^{24}\) programs. By contrast, when using the \(D^*\) set of metarules the hypothesis space contains approximately \(10^{14}\) programs. As explained in Sect. 3.2, assuming that the target hypothesis is in both hypothesis spaces, the Blumer bound (Blumer et al. 1987) tells us that searching the smaller hypothesis space will result in less error, which helps to explain these empirical results. Of course, there is the potential for the \(D^*\) set to perform worse than the D set when the target theory requires the three removed metarules, but we did not observe this situation in this experiment.
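To make this concrete (a back-of-the-envelope reading of the bound added here, not a calculation taken from the paper): for a learner that returns a hypothesis consistent with the training examples, the Blumer bound states that achieving error at most \(\epsilon \) with probability at least \(1-\delta \) requires on the order of \((\ln |H| + \ln (1/\delta ))/\epsilon \) examples. Shrinking the hypothesis space from roughly \(10^{24}\) to \(10^{14}\) programs therefore reduces the \(\ln |H|\) term from about 55 to about 32, so with a fixed number of training examples the corresponding error bound shrinks by roughly the same proportion.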
Figure 3 shows the target program for \(\hbox {T}_8\) and example programs learned by Metagol using the various reduced sets of metarules. Only the \(\hbox {D}^*\) program is success set equivalent (footnote 23) to the target program when restricted to the target predicate f/1. In all three cases Metagol discovered that if a carriage has three wheels then it is a long carriage, i.e. Metagol discovered that the literal long(C2) is redundant in the target program. Indeed, if we unfold the \(\hbox {D}^*\) program to remove the invented predicates then the resulting single clause program is one literal shorter than the target program.
Overall, the results from this experiment suggest that we can reject the null hypothesis, both in terms of predictive accuracies and learning times.
String transformations
In Lin et al. (2014) and Cropper and Muggleton (2019) the authors evaluate Metagol on 17 real-world string transformation tasks using a predefined (hand-crafted) set of metarules. In this experiment, we compare learning with different metarules on an expanded dataset with 250 string transformation tasks.
Each string transformation task has 10 examples. Each example is an atom of the form f(x, y) where f is the task name and x and y are strings. Table 19 shows task p6 where the goal is to learn a program that filters the capital letters from the input. We supply Metagol with dyadic background predicates, such as tail, dropLast, reverse, filter_letter, filter_uppercase, dropWhile_not_letter, takeWhile_uppercase. The full details can be found in the code repository. We vary the metarules given to Metagol. We use the S-, E-, and D-reductions of the fragment \({{{\mathscr {U}}}}^{\{2\}}_{5}\). We again also use the D-reduction of the fragment \({{{\mathscr {U}}}}^{\{2\}}_{5}\) restricted to the fragment \({{{\mathscr {U}}}}^{\{2\}}_{2}\), which is again denoted as \(D^*\).
Table 19 Examples of the p6 string transformation problem input–output pairs
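For intuition, a hypothetical pair of this kind (illustrative only; it is not taken from Table 19, whose contents are not reproduced here) would be f("Learning Machines 2019", "LM"), i.e. the output keeps exactly the capital letters of the input, in order.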
Our experimental method is:
Sample 50 tasks Ts from the set \(\{p1,\dots ,p250\}\)
For each \(t \in Ts\):
Sample 5 training examples and use the remaining examples as testing examples
For each set of metarules m in the S-, E-, D-, and \(D^*\)-reductions:
Learn a program p for task t using the training examples and metarules m
Measure the predictive accuracy of p using the testing examples
Table 20 shows the mean predictive accuracies and learning times when learning with the different sets of metarules. Note that we are not interested in the absolute predictive accuracy, which is limited by factors such as the low timeout and insufficiency of the BK. We are instead interested in the relative accuracies. Table 20 shows that the D set outperforms the S and E sets, with a higher mean accuracy of 33%, versus 22% and 22% respectively. The \(D^*\) set outperforms them all with a mean accuracy of 56%. A McNemar's test on the D and \(D^*\) accuracies confirmed the significance at the \(p < 0.01\) level.
Table 20 shows the corresponding learning times when varying the metarules. Again, the D set outperforms the S and E sets, and again the \(D^*\) set outperforms them all. A paired t-test on the D and \(D^*\) learning times confirmed the significance at the \(p < 0.01\) level.
Overall, the results from this experiment give further evidence to reject the null hypothesis, both in terms of predictive accuracies and learning times.
Table 20 Experimental results on the string transformation problems
Inducing game rules
The general game playing (GGP) framework (Genesereth et al. 2005) is a system for evaluating an agent's general intelligence across a wide range of tasks. In the GGP competition, agents are tested on games they have never seen before. In each round, the agents are given the rules of a new game. The rules are described symbolically as a logic program. The agents are given a few seconds to think, to process the rules of the game, and to then start playing, thus producing game traces. The winner of the competition is the agent who gets the best total score over all the games. In this experiment, we use the IGGP dataset (Cropper et al. 2019) which inverts the GGP task: an ILP system is given game traces and the task is to learn a set of rules (a logic program) that could have produced these traces.
The IGGP dataset contains problems drawn from 50 games. We focus on the eight games shown in Table 21 which contain BK compatible with the metarule fragments we consider (i.e. the BK contains predicates in the fragment \({{{\mathscr {M}}}}^{{2}}_{m}\)). The other games contain predicates with arity greater than two. Each game has four target predicates legal, next, goal, and terminal, where the arities depend on the game. Figure 4 shows the target solution for the next predicate for the minimal decay game. Each game contains training/validate/test data, composed of sets of ground atoms, in a 4:1:1 split. We vary the metarules given to Metagol. We use the S-, E-, and D-reductions of the fragment \({{{\mathscr {D}}}}^{\{1,2\}}_{5}\). We again also use the D-reduction of the fragment \({{{\mathscr {D}}}}^{\{1,2\}}_{5}\) restricted to the fragment \({{{\mathscr {D}}}}^{\{1,2\}}_{2}\), which is again denoted as \(D^*\).
Table 21 IGGP games used in the experiments
Target solution for the next predicate for the minimal decay game
The majority of game examples are negative. We therefore use balanced accuracy to evaluate the approaches. Given background knowledge B, sets of positive \(E^+\) and negative \(E^-\) testing examples, and a logic program H, we define the number of positive examples as \(p=|E^+|\), the number of negative examples as \(n=|E^-|\), the number of true positives as \(tp=|\{e \in E^+ | B \cup H \models e\}|\), the number of true negatives as \(tn=|\{e \in E^- | B \cup H \not \models e\}|\), and the balanced accuracy \(ba = (tp/p + tn/n)/2\).
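As an illustration of why we use this measure (with hypothetical numbers, not taken from the experiments): a program that entails no testing example at all has \(tp = 0\) and \(tn = n\), so when negative examples dominate its standard accuracy \((tp+tn)/(p+n)\) is close to 1, yet its balanced accuracy is \((0/p + n/n)/2 = 0.5\), the same as random guessing.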
Our experimental method is as follows. For each game g, each task \(g_t\), and each set of metarules m in the S-, E-, D-, and \(D^*\)-reductions:
Learn a program p using all the training examples for \(g_t\) using the metarules m with a timeout of 10 min
Measure the balanced accuracy of p using the testing examples
If no program is found in 10 min then no program is returned and every testing example is deemed to have failed.
Table 22 shows the balanced accuracies when learning with the different sets of metarules. Again, we are not interested in the absolute accuracies, only the relative differences when learning using different sets of metarules. The D set outperforms the S and E sets with a higher mean accuracy of 72%, versus 66% and 66% respectively. The \(D^*\) set again outperforms them all with a mean accuracy of 73%. A McNemar's test on the D and \(D^*\) accuracies confirmed the significance at the \(p < 0.01\) level. Table 22 shows the corresponding learning times when varying the metarules. Again, the D set outperforms the S and E sets, and again the \(D^*\) set outperforms them all. However, a paired t-test on the D and \(D^*\) learning times was significant only at the \(p < 0.08\) level, so the difference in learning times is not statistically significant. Overall, the results from this experiment suggest that we can reject the null hypothesis in terms of predictive accuracies but not learning times.
Table 22 Experimental results on the IGGP data
Conclusions and further work
As stated in Sect. 1, despite the widespread use of metarules, there is little work determining which metarules to use for a given learning task. Instead, suitable metarules are assumed to be given as part of the background knowledge, or are used without any theoretical justification. Deciding which metarules to use for a given learning task is a major open challenge (Cropper 2017; Cropper and Muggleton 2014) and is a trade-off between efficiency and expressivity: the hypothesis space grows given more metarules (Cropper and Muggleton 2014; Lin et al. 2014), so we wish to use fewer metarules, but if we use too few metarules then we lose expressivity. To address this issue, Cropper and Muggleton (2014) used E-reduction on sets of metarules and showed that learning with E-reduced sets of metarules can lead to higher predictive accuracies and lower learning times compared to learning with non-E-reduced sets. However, as we claimed in Sect. 1, E-reduction is not always the most appropriate form of reduction because it can remove metarules necessary to learn programs with the necessary specificity.
To support our claim, we have compared three forms of logical reduction: S-, E-, and D-reduction, where the latter is a new form of reduction based on SLD-derivations. We have used the reduction algorithms to reduce finite sets of metarules. Table 15 summarises the results. We have shown that many sets of metarules relevant to ILP do not have finite reductions (Theorem 7). These negative results have direct (negative) implications for MIL. Specifically, our results mean that, in certain cases, a MIL system, such as Metagol or HEXMIL (Kaminski et al. 2018), cannot be given a finite set of metarules from which it can learn any program, such as when learning arbitrary Datalog programs. The results will also likely have implications for other forms of ILP which rely on metarules.
Our experiments compared the learning performance of Metagol when using the different reduced sets of metarules. In general, using the D-reduced set outperforms both the S- and E-reduced sets in terms of predictive accuracy and learning time. Our experimental results give strong evidence for our claim. We also compared a \(D^*\)-reduced set, a subset of the D-reduced metarules, which, although derivationally incomplete, outperforms the other two sets in terms of predictive accuracies and learning times.
Theorem 7 shows that certain fragments of metarules do not have finite D-reductions. However, our experimental results show that using D-reduced sets of metarules leads to higher predictive accuracies and lower learning times compared to the other forms of reduction. Therefore, our work now opens up a new challenge of overcoming this negative theoretical result. One idea is to explore whether special metarules, such as a currying metarule (Cropper and Muggleton 2016a), could alleviate the issue.
In future work we would also like to reduce more general fragments of logic, such as triadic logics, which would allow us to tackle a wider variety of problems, such as more of the games in the IGGP dataset.
We have compared the learning performance of Metagol when using different reduced sets of metarules. However, we have not investigated whether these reductions are optimal. For instance, when considering derivation reductions, it may, in some cases, be beneficial to re-add redundant metarules to the reduced sets to avoid having to derive them through SLD-resolution. In future work, we would like to investigate identifying an optimal set of metarules for a given learning task, or preferably learning which metarules to use for a given learning task.
We have shown that, although incomplete, the \(D^*\)-reduced set of metarules outperforms the other reductions. In future work we would like to explore other methods which sacrifice completeness for efficiency.
We have used the logical reduction techniques to remove redundant metarules. It may also be beneficial to simultaneously reduce metarules and standard background knowledge. The idea of purposely removing background predicates is similar to dimensionality reduction, widely used in other forms of machine learning (Skillicorn 2007), but which has been under researched in ILP (Fürnkranz 1997). Initial experiments indicate that this is possible (Cropper 2017; Cropper and Muggleton 2014), and we aim to develop this idea in future work.
Metarules are also called program schemata (Flener 1996), second-order schemata (De Raedt and Bruynooghe 1992), and clause templates (Albarghouthi et al. 2017), amongst many other names.
The fully quantified rule is \(\exists P \exists Q \exists R \forall A \forall B \forall C \; P(A,B) \leftarrow Q(A,C), R(C,B)\).
A chained dyadic Datalog clause has the restriction that every first-order variable in a clause appears in exactly two literals and a path connects every literal in the body of C to the head of C. In other words, a chained dyadic Datalog clause has the form \(P_0(X_0,X_1) \leftarrow P_1(X_0,X_2), P_2(X_2,X_3), \dots , P_n(X_n,X_1)\) where the order of the arguments in the literals does not matter.
We use \(\vdash \) to represent derivability of both first-order and second-order clauses. In practice we reason about second-order clauses using first-order resolution via encapsulation (Cropper and Muggleton 2014), which we describe in Sect. 3.3.
Although the MIL problem has also been encoded as an ASP problem (Kaminski et al. 2018).
MIL uses example driven test incorporation for finding consistent programs as opposed to the generate-and-test approach of clause refinement.
Datalog also imposes additional constraints on negation in the body of a clause, but because we disallow negation in the body we omit these constraints for simplicity.
By more general we mean we focus on metarules that are independent of any particular ILP problem with particular predicate and constant symbols.
For instance, the metarule \(P(A) \leftarrow \) entails and subsumes every metarule with a monadic head.
The Blumer bound is a reformulation of Lemma 2.1 in Blumer et al. (1987).
In practice we use more efficient algorithms for each approach. For instance, in the derivation reduction Prolog implementation we use the knowledge gained from Lemma 1 to add pruning so as to ignore clauses that are too large to be useful to check whether a clause is derivable.
Rename the variables in \(M_4\) to form \(M_4' = P_0(X_1,X_2) \leftarrow P_1(X_2,X_3),P_2(X_1,X_4),P_3(X_1,X_4),P_4(X_2,X_3)\). Then \(M_4' \theta = P(A,B) \leftarrow R(B,B),Q(A,A),Q(A,A),R(B,B)\) where \(\theta =\{P_0/P,P_1/R,P_2/Q,P_3/Q,P_4/R,X_1/A,X_2/B,X_3/B,X_4/A\}\). It follows that \(M_4' \theta \subseteq M_2\), so \(M_4 \preceq M_2\), which in turn implies \(M_4 \models M_2\).
Rename the variables in \(M_4\) to form \(M_4' = P_0(X_1,X_2) \leftarrow P_1(X_2,X_3),P_2(X_1,X_4),P_3(X_1,X_4),P_4(X_2,X_3)\). Then \(M_4' \theta = P(A,B) \leftarrow R(B,C),Q(A,C),Q(A,C),R(B,C)\) where \(\theta =\{P_0/P,P_1/R,P_2/Q,P_3/Q,P_4/R,X_1/A,X_2/B,X_3/C,X_4/C\}\). It follows that \(M_4' \theta \subseteq M_3\), so \(M_4 \preceq M_3\), which in turn implies \(M_4 \models M_3\).
Rename the variables in \(M_3\) to form \(M_3' = P_0(X,Y) \leftarrow P_1(X,Z),P_2(Y,Z)\). Resolve the first body literal of \(M_2\) with \(M_3\) to form \(R_1 = P(A,B) \leftarrow P_1(A,Z),P_2(A,Z),R(B,B)\). Rename the variables \(P_1\) to \(P_3\), \(P_2\) to \(P_4\), and Z to \(Z_1\) in \(R_1\) (to standardise apart the variables) to form \(R_2 = P(A,B) \leftarrow P_3(A,Z_1),P_4(A,Z_1),R(B,B)\). Resolve the last body literal of \(R_2\) with \(M_3'\) to form \(R_3 = P(A,B) \leftarrow P_3(A,Z_1),P_4(A,Z_1),P_1(B,Z),P_2(B,Z)\). Rename the variables \(Z_1\) to D, Z to C, \(P_3\) to R, \(P_4\) to S, \(P_1\) to Q, and \(P_2\) to T in \(R_3\) to form \(R_4 = P(A,B) \leftarrow R(A,D),S(A,D),Q(B,C),T(B,C)\). Thus \(R_4 = M_4\), so it follows that \(\{M_2,M_3\} \models M_4\).
The entailment and derivation reduction algorithms often took 4–5 h to find a reduction. However, in some cases, typically where the fragments contained many metarules, the algorithms took around 12 h to find a reduction. By contrast, the subsumption reduction algorithm typically found a reduction in 30 min.
Connected clauses are also known as linked clauses (Gottlob et al. 1997).
Those are the only options to derive \(C_I\). Otherwise, e.g. with \(C_2 = H(A',C') \leftarrow Q(A',D')\), the resulting clause is not \(C_I\) because \(D'\) is not unified with any of the variables in \(C_1\) (whereas \(A'\) unifies with A and \(C'\) with C), e.g. the result includes the literal \(Q(A,D')\) instead of Q(A, C), hence it is not \(C_I\).
Note that this proof also shows that \({{{\mathscr {K}}}}^{2}_{\infty }\) does not have a \({{{\mathscr {K}}}}^{2}_{3}\)-S-reduction.
https://github.com/metagol/metagol/releases/tag/2.3.0.
Experimental data is available at http://github.com/andrewcropper/mlj19-reduce.
A statistical test on paired nominal data https://en.wikipedia.org/wiki/McNemar%27s_test.
A statistical test on paired ordinal data http://www.biostathandbook.com/pairedttest.html.
The success set of a logic program P is the set of ground atoms \(\{A \in hb(P)|P\cup \{ \lnot A \}\;\text {has a SLD-refutation}\}\), where hb(P) represents the Herbrand base of the logic program P. The success set restricted to a specific predicate symbol p is the subset of the success set restricted to atoms containing the predicate symbol p.
Albarghouthi, A., Koutris, P., Naik, M., & Smith, C. (2017). Constraint-based synthesis of Datalog programs. In J. C. Beck (Ed.), Principles and practice of constraint programming—23rd international conference, CP 2017, Melbourne, VIC, Australia, August 28–September 1, 2017, Proceedings, volume 10416 of Lecture Notes in Computer Science (pp. 689–706). Springer.
Bienvenu, M. (2007). Prime implicates and prime implicants in modal logic. In Proceedings of the twenty-second AAAI conference on artificial intelligence, July 22–26, 2007, Vancouver, BC, Canada (pp. 379–384). AAAI Press.
Blumer, A., Ehrenfeucht, A., Haussler, D., & Warmuth, M. K. (1987). Occam's razor. Information Processing Letters, 24(6), 377–380.
Bradley, A. R., & Manna, Z. (2007). The calculus of computation-decision procedures with applications to verification. Berlin: Springer.
Campero, A., Pareja, A., Klinger, T., Tenenbaum, J., & Riedel, S. (2018). Logical rule induction and theory learning using neural theorem proving. ArXiv e-prints, September 2018.
Church, A. (1936). A note on the Entscheidungsproblem. The Journal of Symbolic Logic, 1(1), 40–41.
Cohen, W. W. (1994). Grammatically biased learning: Learning logic programs using an explicit antecedent description language. Artificial Intelligence, 68(2), 303–366.
Cropper, A. (2017). Efficiently learning efficient programs. Ph.D. thesis, Imperial College London, UK.
Cropper, A., Evans, R., & Law, M. (2019). Inductive general game playing. ArXiv e-prints, arXiv:1906.09627, Jun 2019.
Cropper, A., & Muggleton, S. H. (2014). Logical minimisation of meta-rules within meta-interpretive learning. In J. Davis & J. Ramon (Eds.), Inductive logic programming—24th international conference, ILP 2014, Nancy, France, September 14–16, 2014. Revised selected papers, volume 9046 of Lecture Notes in Computer Science (pp. 62–75). Springer.
Cropper, A., & Muggleton, S. H. (2015). Learning efficient logical robot strategies involving composable objects. In Yang, Q., & Wooldridge, M. (Eds.), Proceedings of the twenty-fourth international joint conference on artificial intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25–31, 2015 (pp. 3423–3429). AAAI Press.
Cropper, A., & Muggleton, S. H. (2016a). Learning higher-order logic programs through abstraction and invention. In Kambhampati, S. (Ed.), Proceedings of the twenty-fifth international joint conference on artificial intelligence, IJCAI 2016, New York, NY, USA, 9–15 July 2016 (pp. 1418–1424). IJCAI/AAAI Press.
Cropper, A., & Muggleton, S. H. (2016b). Metagol system. https://github.com/metagol/metagol. Accessed 1 July 2019.
Cropper, A., & Muggleton, S. H. (2019). Learning efficient logic programs. Machine Learning, 108(7), 1063–1083.
Cropper, A., Tamaddoni-Nezhad, A., & Muggleton, S. H. (2015). Meta-interpretive learning of data transformation programs. In Inoue, K., Ohwada, H., & Yamamoto, A. (Eds.), Inductive logic programming—25th international conference, ILP 2015, Kyoto, Japan, August 20–22, 2015, revised selected papers, volume 9575 of Lecture Notes in Computer Science (pp. 46–59). Springer.
Cropper, A., & Tourret, S. (2018). Derivation reduction of metarules in meta-interpretive learning. In Riguzzi, F., Bellodi, E., & Zese, R. (Eds.), Inductive logic programming—28th international conference, ILP 2018, Ferrara, Italy, September 2–4, 2018, proceedings, volume 11105 of Lecture Notes in Computer Science (pp. 1–21). Springer.
Dantsin, E., Eiter, T., Gottlob, G., & Voronkov, A. (2001). Complexity and expressive power of logic programming. ACM Computing Surveys, 33(3), 374–425.
De Raedt, L. (2012). Declarative modeling for machine learning and data mining. In Algorithmic learning theory—23rd international conference, ALT 2012, Lyon, France, October 29–31, 2012. proceedings (p. 12).
De Raedt, L., & Bruynooghe, M. (1992). Interactive concept-learning and constructive induction by analogy. Machine Learning, 8, 107–150.
Echenim, M., Peltier, N., & Tourret, S. (2015). Quantifier-free equational logic and prime implicate generation. In A. P. Felty & A. Middeldorp (Eds.), Automated deduction—CADE-25–25th international conference on automated deduction, Berlin, Germany, August 1–7, 2015, proceedings, volume 9195 of Lecture Notes in Computer Science (pp. 311–325). Springer.
Emde, W., Habel, C., & Rollinger, C.-R. (1983). The discovery of the equator or concept driven learning. In M. Alanbundy (Ed.), Proceedings of the 8th international joint conference on artificial intelligence. Karlsruhe, FRG, August 1983 (pp. 455–458). William Kaufmann.
Evans, R., & Grefenstette, E. (2018). Learning explanatory rules from noisy data. Journal of Artificial Intelligence Research, 61, 1–64.
Flener, P. (1996). Inductive logic program synthesis with DIALOGS. In Muggleton, S. (Ed.), Inductive logic programming, 6th international workshop, ILP-96, Stockholm, Sweden, August 26–28, 1996, selected papers, volume 1314 of Lecture Notes in Computer Science (pp. 175–198). Springer.
Fonseca, N. A., Costa, V. S., Silva, F. M. A., & Camacho, R. (2004). On avoiding redundancy in inductive logic programming. In R. Camacho, R. D. King & A. Srinivasan (Eds.), Inductive logic programming, 14th international conference, ILP 2004, Porto, Portugal, September 6–8, 2004, proceedings, volume 3194 of Lecture Notes in Computer Science (pp. 132–146). Springer.
Fürnkranz, J. (1997). Dimensionality reduction in ILP: A call to arms. In Proceedings of the IJCAI-97 workshop on frontiers of inductive logic programming (pp. 81–86).
Garey, M. R., & Johnson, D. S. (1979). Computers and intractability: A guide to the theory of NP-completeness. New York: W. H. Freeman.
Genesereth, M. R., Love, N., & Pell, B. (2005). General game playing: Overview of the AAAI competition. AI Magazine, 26(2), 62–72.
Gottlob, G., & Fermüller, C. G. (1993). Removing redundancy from a clause. Artificial Intelligence, 61(2), 263–289.
Gottlob, G., Leone, N., & Scarcello, F.(1997). On the complexity of some inductive logic programming problems. In N. Lavrac & S. Dzeroski (Eds.), Inductive logic programming, 7th international workshop, ILP-97, Prague, Czech Republic, September 17–20, 1997, proceedings, volume 1297 of Lecture Notes in Computer Science (pp. 17–32). Springer.
Hemaspaandra, E., & Schnoor, H. (2011). Minimization for generalized boolean formulas. In T. Walsh (Ed.), IJCAI 2011, proceedings of the 22nd international joint conference on artificial intelligence, Barcelona, Catalonia, Spain, July 16–22, 2011 (pp. 566–571). IJCAI/AAAI.
Heule, M., Järvisalo, M., Lonsing, F., Seidl, M., & Biere, A. (2015). Clause elimination for SAT and QSAT. Artificial Intelligence Research, 53, 127–168.
Hillenbrand, T., Piskac, R., Waldmann, U., & Weidenbach, C. (2013). From search to computation: Redundancy criteria and simplification at work. In A. Voronkov, & C. Weidenbach (Eds.), Programming logics - essays in memory of Harald Ganzinger, volume 7797 of Lecture Notes in Computer Science (pp. 169–193). Springer.
Joyner, W. H, Jr. (1976). Resolution strategies as decision procedures. Journal of the ACM, 23(3), 398–417.
Kaminski, T., Eiter, T., & Inoue, K. (2018). Exploiting answer set programming with external sources for meta-interpretive learning. TPLP, 18(3–4), 571–588.
Kietz, J.-U., & Wrobel, S. (1992). Controlling the complexity of learning in logic through syntactic and task-oriented models. In Inductive logic programming. Citeseer.
Kowalski, R. A. (1974). Predicate logic as programming language. In IFIP congress (pp. 569–574).
Larson, J., & Michalski, R. S. (1977). Inductive inference of VL decision rules. SIGART Newsletter, 63, 38–44.
Liberatore, P. (2005). Redundancy in logic I: CNF propositional formulae. Artificial Intelligence, 163(2), 203–232.
Liberatore, P. (2008). Redundancy in logic II: 2CNF and Horn propositional formulae. Artificial Intelligence, 172(2–3), 265–299.
Lin, D., Dechter, E., Ellis, K., Tenenbaum, J. B., & Muggleton, S. (2014). Bias reformulation for one-shot function induction. In ECAI 2014—21st European conference on artificial intelligence, 18–22 August 2014, Prague, Czech Republic—including prestigious applications of intelligent systems (PAIS 2014) (pp. 525–530).
Lloyd, J. W. (1987). Foundations of logic programming (2nd ed.). Berlin: Springer.
Lloyd, J. W. (2003). Logic for learning. Berlin: Springer.
Marcinkowski, J., & Pacholski, L. (1992). Undecidability of the Horn-clause implication problem. In 33rd annual symposium on foundations of computer science, Pittsburgh, Pennsylvania, USA, 24–27 October 1992 (pp. 354–362).
Marquis, P. (2000). Consequence finding algorithms. In Handbook of defeasible reasoning and uncertainty management systems (pp. 41–145). Springer.
McCarthy, J. (1995). Making robots conscious of their mental states. In Machine intelligence 15, intelligent Agents [St. Catherine's College, Oxford, July 1995] (pp. 3–17).
Morel, R., Cropper, A., & Ong, C.-H. Luke (2019). Typed meta-interpretive learning of logic programs. In Calimeri, F., Leone, N., & Manna, M. (Eds.), Logics in artificial intelligence—16th European conference, JELIA 2019, Rende, Italy, May 7–11, 2019, proceedings, volume 11468 of Lecture Notes in Computer Science (pp. 198–213). Springer.
Muggleton, S. (1995). Inverse entailment and Progol. New Generation Computing, 13(3&4), 245–286.
Muggleton, S., De Raedt, L., Poole, D., Bratko, I., Flach, P. A., Inoue, K., et al. (2012). ILP turns 20-biography and future challenges. Machine Learning, 86(1), 3–23.
Muggleton, S., & Feng, C. (1990). Efficient induction of logic programs. In Algorithmic learning theory, first international workshop, ALT '90, Tokyo, Japan, October 8–10, 1990, proceedings (pp. 368–381).
Muggleton, S. H., Lin, D., Pahlavi, N., & Tamaddoni-Nezhad, A. (2014). Meta-interpretive learning: Application to grammatical inference. Machine Learning, 94(1), 25–49.
Muggleton, S. H., Lin, D., & Tamaddoni-Nezhad, A. (2015). Meta-interpretive learning of higher-order dyadic Datalog: Predicate invention revisited. Machine Learning, 100(1), 49–73.
Nédellec, C., Rouveirol, C., Adé, H., Bergadano, F., & Tausend, B. (1996). Declarative bias in ILP. Advances in inductive logic programming, 32, 82–103.
Nienhuys-Cheng, S.-H., & de Wolf, R. (1997). Foundations of inductive logic programming. New York, Secaucus, NJ: Springer.
Plotkin, G.D. (1971). Automatic methods of inductive inference. Ph.D. thesis, Edinburgh University, August 1971.
Robinson, J. A. (1965). A machine-oriented logic based on the resolution principle. Journal of the ACM, 12(1), 23–41.
Schmidt-Schauß, M. (1988). Implication of clauses is undecidable. Theoretical Computer Science, 59, 287–296.
Shapiro, E. Y. (1983). Algorithmic program debugging. London: MIT Press.
Si, X., Lee, W., Zhang, R., Albarghouthi, A., Koutris, P., & Naik, M. (2018). Syntax-guided synthesis of Datalog programs. In G. T. Leavens, A. Garcia, & C. S. Pasareanu (Eds.), Proceedings of the 2018 ACM joint meeting on european software engineering conference and symposium on the foundations of software engineering, ESEC/SIGSOFT FSE 2018, Lake Buena Vista, FL, USA, November 04–09, 2018 (pp. 515–527). ACM.
Skillicorn, D. (2007). Understanding complex datasets: Data mining with matrix decompositions. New York: Chapman and Hall/CRC.
Tärnlund, S. Å. (1977). Horn clause computability. BIT, 17(2), 215–226.
Tourret, S., & Cropper, A. (2019). SLD-resolution reduction of second-order Horn fragments. In F. Calimeri, N. Leone & M. Manna (Eds.), Logics in artificial intelligence—16th European conference, JELIA 2019, Rende, Italy, May 7–11, 2019, proceedings, volume 11468 of Lecture Notes in Computer Science (pp. 259–276). Springer.
Wang, W. Y., Mazaitis, K., & Cohen, W. W. (2014). Structure learning via parameter learning. In Li, J., Wang, X. S., Garofalakis, M. N., Soboroff, I., Suel, T., & Wang, M. (Eds.), Proceedings of the 23rd ACM international conference on conference on information and knowledge management, CIKM 2014, Shanghai, China, November 3–7, 2014 (pp. 1199–1208). ACM.
Weidenbach, C., & Wischnewski, P. (2010). Subterm contextual rewriting. AI Communications, 23(2–3), 97–109.
The authors thank Stephen Muggleton and Katsumi Inoue for discussions on this topic. We especially thank Rolf Morel for valuable feedback on the paper.
University of Oxford, Oxford, UK
Andrew Cropper
Max Planck Institute for Informatics, Saarbrücken, Germany
Sophie Tourret
Correspondence to Andrew Cropper.
Editors: Dimitar Kazakov and Filip Zelezny.
Detailed reduction results
Connected (\({{{\mathscr {C}}}}^{a}_{m}\)) reductions
See Tables 23 and 24.
Table 23 Reductions of the connected fragment \({{{\mathscr {C}}}}^{\{2\}}_{5}\)
Table 24 Reductions of the connected fragment \({{{\mathscr {C}}}}^{\{1,2\}}_{5}\)
Datalog (\({{{\mathscr {D}}}}^{a}_{m}\)) reductions
See Tables 25, 26 and 27.
Table 25 Reductions of the Datalog fragment \({{{\mathscr {D}}}}^{\{0,1,2\}}_{5}\)
Table 26 Reductions of the Datalog fragment \({{{\mathscr {D}}}}^{\{1,2\}}_{5}\)
Table 27 Reductions of the Datalog fragment \({{{\mathscr {D}}}}^{\{2\}}_{5}\)
Table 29 Reductions of the singleton-free fragment \({{{\mathscr {K}}}}^{\{1,2\}}_{5}\)
Table 30 Reductions of the singleton-free fragment \({{{\mathscr {K}}}}^{\{0,1,2\}}_{5}\)
Table 31 Reductions of the fragment \({{{\mathscr {U}}}}^{\{2\}}_{5}\)
Table 33 Reductions of the fragment \({{{\mathscr {U}}}}^{\{0,1,2\}}_{5}\)
Cropper, A., Tourret, S. Logical reduction of metarules. Mach Learn (2019) doi:10.1007/s10994-019-05834-x
Revised: 12 July 2019
Inductive logic programming
Program induction
Inductive programming
Special Issue of the Inductive Logic Programming (ILP) 2019
\begin{document}
{\Large \bf Rigid and Non-Rigid Mathematical \\ \\
Theories : the Ring $\mathbb{Z}$ Is Nearly Rigid} \\
{\bf Elem\'{e}r E Rosinger} \\ Department of Mathematics \\ and Applied Mathematics \\ University of Pretoria \\ Pretoria \\ 0002 South Africa \\ [email protected] \\
{\it Dedicated to Marie-Louise Nykamp} \\ \\
{\bf Abstract} \\
Mathematical theories are classified in two distinct classes : {\it rigid}, and on the other hand, {\it non-rigid} ones. Rigid theories, like group theory, topology, category theory, etc., have a basic concept - given for instance by a set of axioms - from which all the other concepts are defined in a unique way. Non-rigid theories, like ring theory, certain general enough pseudo-topologies, etc., have a number of their concepts defined in a more free or relatively independent manner of one another, namely, with {\it compatibility} conditions between them only. As an example, it is shown that the usual ring structure on the integers $\mathbb{Z}$ is not rigid, however, it is nearly rigid. \\ \\
{\large \bf 0. Introduction} \\
Rigid theories, like group theory, topology, category theory, etc., have a basic concept - given for instance by a set of axioms - from which all the other concepts are defined in a unique way. Non-rigid theories, like ring theory, certain general enough pseudo-topologies, etc., have a number of their concepts defined in a more free or relatively independent manner of one another, namely, with {\it compatibility} conditions between them only. \\
One can note that even in Algebra there are nonrigid mathematical structures. For instance, let $( R, +, . )$ be a ring. Then in principle, neither the addition "+" determines the multiplication ".", nor multiplication determines addition. Instead, they are relatively independent of one another, and only satisfy the usual compatibility conditions, namely, the distributivity of multiplication with respect to addition. \\
On the contrary, in groups $( G, \diamond )$, all concepts are defined uniquely based eventually on the underlying set $G$ and the binary operation $\diamond$. \\
Non-rigid mathematical structures need not always form usual Eilenberg - Mac Lane categories, [8,12], but more general ones, as illustrated by the case of certain general enough concepts of pseudo-topology. \\
Such a rather general concept of pseudo-topology was used in constructing differential algebras of generalized functions containing the Schwartz distributions, [1-7,10]. These algebras proved to be convenient in solving large classes of nonlinear partial differential equations, see [10] and the literature cited there, as well as section 46F30 in the Subject Classification 2009 of the American Mathematical Society, at www.ams.org/msc/46Fxx.html \\
And it is precisely because of that non-rigid character that the totality of such pseudo-topologies does {\it no longer} constitute a usual Eilenberg - Mac Lane category, but one which is more general, [8,12]. \\
As it happens, the rigid structure of the usual Hausdorff-Kuratowski-Bourbaki, or in short, HKB concept of topology is also one of the reasons for a number of its important deficiencies, such as for instance, that the category of such topological spaces is not Cartesian closed. \\
Spaces $( \Omega, {\cal M}, \mu )$ with measure, where $\Omega$ is the underlying set, ${\cal M}$ is a $\sigma$-algebra on it, and $\mu : {\cal M} \longrightarrow \mathbb{R}$ is a $\sigma$-additive measure, are further examples of non-rigid structures, since for a given $( \Omega, {\cal M} )$, there can in general be infinitely many associated $\mu$. \\
Topological groups, or even topological vector spaces, are typically non-rigid structures. Indeed, on an arbitrary group, or even vector space, there may in general be many compatible topologies, and even Hausdorff topologies. \\
Obviously, an important advantage of a rigid mathematical structure, and in particular, of the usual HKB concept of topology, is a simplicity of the respective theoretical development. Such simplicity comes from the fact that one can start with only one single concept, like for instance the open sets in the case of HKB topologies, and then based on that concept, all the other concepts can be defined in a unique manner. \\ Consequently, the impression may be created that one has managed to develop a universal theory in the respective discipline, universal in the sense that there may not be any need for alternative theories in that discipline, as for instance is often the perception about the HKB topology. \\
The disadvantage of a rigid mathematical structure is in a consequent built in lack of flexibility regarding the interdependence of the various concepts involved, since each of them, except for a single starting concept, are determined uniquely in terms of that latter one. And in the case of the HKB topologies this is manifested, among others, in the difficulties related to dealing with suitable topologies on spaces of continuous functions, that is, in the failure of the category of such topological spaces to be Cartesian closed. \\
Non-rigid mathematical structures, and in particular, certain general enough pseudo-topologies, can manifest fewer difficulties coming from a lack of flexibility. \\
A disadvantage of such non-rigid mathematical structures - as for instance with various approaches to pseudo-topologies - is in the large variety of ways the respective theories can be set up. Also, their respective theoretical development may turn out to be more complex than is the case with rigid mathematical structures. \\ Such facts can lead to the impression that one could not expect to find a universal enough non-rigid mathematical structure in some given discipline, and for instance, certainly not in the realms of pseudo-topologies. \\
As it happens so far in the literature on pseudo-topologies, there seems not to be a wider and explicit enough awareness about the following two facts
\begin{itemize}
\item one should rather use non-rigid structures in order to avoid the difficulties coming from the lack of flexibility of the rigid concept of usual HKB topology,
\item the likely consequence of using non-rigid structures is the lack of a sufficiently universal concept of pseudo-topology.
\end{itemize}
As it happens, such a lack of awareness leads to a tendency to develop more and more general concepts of pseudo-topology, hoping to reach a sufficiently universal one, thus being able to replace once and for all the usual HKB topology with "THE" one and only "winning" concept of pseudo-topology. \\ Such an unchecked search for increased generality, however, may easily lead to rather meagre theories. \\
It also happens in the literature that, even if mainly intuitively, when setting up various concepts of pseudo-topology a certain restraint is manifested when going away from a rigid theory towards some non-rigid ones. And certainly, the reason for such a restraint is that one would like to hold to the advantage of rigid theories which are more simple to develop than the non-rigid ones. \\
Amusingly, the precedent in Geometry, which happened two centuries earlier, is missed from view both by those who hold to the usual HKB concept of topology and by those trying to set up a general enough pseudo-topology which may hopefully be universal. After all, having by now gotten accustomed to the fact that, so fortunately, Geometry can mean many things in different situations, it may perhaps be appropriate to accept a similar view regarding Topology ... \\
Lastly, let us note that in modern Mathematics it is "axiomatic" that theories are built as {\it axiomatic systems}. \\ As it happens, however, ever since the early 1930s and G\"{o}del's Incompleteness Theorems, we cannot disregard the consequent deeply inherent limitation of all axiomatic mathematical theories. \\ And that limitation cannot be kept away from nonrigid mathematical structures either, since such structures are also built as axiomatic theories. \\
And that G\"{o}delian limitation comes to further suggest the answer to the issue of what is Topology, is it the HKB one, or is it one or another pseudo-topology ? \\ And the answer is simple indeed : the rigid HKB concept of topology may be just as little unique, as that of Geometry proved to be two centuries earlier ... \\
Regarding Mathematics in general, fortunately, two possible further developments, away from that G\"{o}delian limitation of all axiomatic theories, have recently appeared, even if they are not yet clearly enough in the general mathematical awareness. Namely, {\it self-referential} axiomatic mathematical theories, and perhaps even more surprisingly, {\it inconsistent} axiomatic mathematical theories, [14]. \\ \\
{\bf 1. Rings Are Non-Rigid Structures} \\
In Group Theory, given any group $( G, \diamond )$, be it commutative or not, all the concepts of the theory will in the last analysis be uniquely defined by the set $G$ and the binary operation $\diamond$. \\
On the other hand, in Ring Theory, given a ring $( R, +, .)$, the binary operation $''+''$ of addition, does {\it not} in general determine uniquely the binary operation $''.''$ of multiplication. Instead, they are only supposed to satisfy certain {\it compatibility} relations, namely, the distributivity of multiplication with respect to addition. \\
And rather simple examples show that, given a commutative group, there can be more than one ring multiplication on it. \\
Indeed, let $\mathbb{M}\,^n ( \mathbb{R} )$, with $n \geq 2$, be the set of $n \times n$ matrices of real numbers, and consider on it the commutative group structure given by the usual addition $''+''$ of matrices. \\
The following two {\it different} ring structures can be defined on $\mathbb{M}\,^n ( \mathbb{R} )$. \\
First, let $( \mathbb{M}\,^n ( \mathbb{R} ), +, . )$, where $''.''$ is the usual noncommutative multiplication of square matrices. Second, let $( \mathbb{M}\,^n ( \mathbb{R} ), +, \ast )$ where $''\ast''$ is the term by term multiplication of matrices, namely, given the matrices \\
$~~~~ A = ( a_{i, \, j} ~|~ 1 \leq i, j \leq n ),~~ B = ( b_{i, \, j} ~|~ 1 \leq i, j \leq n ) $ \\
we have $A \ast B = C$, where \\
$~~~~ C = ( c_{i, \, j} ~|~ 1 \leq i, j \leq n ) $ \\
with \\
$~~~~ c_{i, \, j} = a_{i, \, j} . b_{i, \, j} $ \\
And these two ring structures are indeed different, although their underlying commutative group structure is the same. For instance, the first one is noncommutative, while the second one is commutative. Furthermore, the first one has as unit element the square matrix with the diagonal 1, and with all the other elements 0, while the unit element in the second one is the matrix with all the elements 1. \\
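As a concrete illustration, added here with arbitrarily chosen $2 \times 2$ matrices, let \\

$~~~~ A \,=\, \left( \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} \right),~~~~ B \,=\, \left( \begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array} \right) $ \\

then \\

$~~~~ A . B \,=\, \left( \begin{array}{cc} 2 & 1 \\ 1 & 1 \end{array} \right),~~~~ A \ast B \,=\, \left( \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right) $ \\

thus the two multiplications differ already on such simple matrices, while the underlying additive group is the same. \\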
Consequently, Ring Theory is indeed {\it non-rigid}. \\ \\
{\bf 2. The Ring $\mathbb{Z}$ Is Nearly Rigid} \\
Let $''+''$ denote the usual addition on $\mathbb{Z}$ while $''.''$ denotes the usual multiplication on it. Further, for a given integer \\
(2.1)~~~~ $ a \in \mathbb{Z} $ \\
let us consider on $\mathbb{Z}$ the binary operation $''\ast''$ defined by \\
(2.2)~~~~ $ n \ast m = a . n . m,~~~~ n, m \in \mathbb{Z} $ \\
{\bf Lemma 2.1.} \\
$( \mathbb{Z}, +, \ast )$ is a {\it commutative ring}. \\
{\bf Proof.} \\
We have for $n, m, k \in \mathbb{Z}$ the relations \\
$~~~~ n \ast ( m \ast k) = a . n . ( m \ast k ) = a . n . ( a . m . k ) = a . a . n . m . k $ \\
while \\
$~~~~ ( n \ast m ) \ast k = a . ( n \ast m ) . k = a . ( a . n . m ) . k = a . a . n . m . k $ \\
thus the associativity of $''\ast''$. Also we have the relations \\
$~~~~ n \ast ( m + k ) = a . n . ( m + k ) = ( a . n . m ) + ( a . n . k ) = ( n \ast m ) + ( n \ast k ) $ \\
hence the distributivity of $''\ast''$ with respect to $''+''$.
$\Box$ \\
Obviously, if $a = 1$, then $( \mathbb{Z}, +, \ast )$ is the usual ring $( \mathbb{Z}, +, . )$. On the other hand, if $a = -1$, then $( \mathbb{Z}, +, \ast )$ has the somewhat surprising multiplication rule \\
(2.3)~~~~ $ n \ast m = - n . m,~~~~ n, m \in \mathbb{Z} $ \\
and we shall call this the {\it alternate} ring of the usual ring $( \mathbb{Z}, +, . )$. \\
{\bf Lemma 2.2.} \\
$( \mathbb{Z}, +, \ast )$ is a {\it unital} ring, if and only if $a = \pm 1$, in which case it reduces to the usual ring $( \mathbb{Z}, +, . )$, or to its alternate, see (2.3). \\
{\bf Proof.} \\
Let $u \in \mathbb{Z}$, such that \\
$~~~~ u \ast n = n,~ n \in \mathbb{Z} $ \\
then \\
$~~~~ a . u . n = n,~ n \in \mathbb{Z} $ \\
thus in particular \\
$~~~~ a . u = 1 $ \\
which, in view of (2.1), means that \\
$~~~~ a = u = 1 $, ~or~ $ a = u = -1 $
$\Box$ \\
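As a quick check, added here for illustration, the unit of the alternate ring in (2.3) is indeed $u = -1$, since for $a = -1$ we have \\

$~~~~ ( -1 ) \ast n = a . ( -1 ) . n = ( -1 ) . ( -1 ) . n = n,~~~~ n \in \mathbb{Z} $ \\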
Recalling now (2.1), (2.2), we obtain \\
{\bf Theorem 2.1.} \\
All the commutative ring structures on the commutative group $( \mathbb{Z}, + )$ are of the form $( \mathbb{Z}, +, \ast )$, for suitable $a \in \mathbb{Z}$. \\
{\bf Proof.} \\
Given any commutative ring $( \mathbb{Z}, +, \circ )$, we denote \\
(2.4)~~~~ $ a = 1 \circ 1 $ \\
Let now $n, m \in \mathbb{Z},~ n, m > 0$, then \\
(2.5)~~~~ $ n \circ m = a . n . m $ \\
Indeed \\
$~~~~ \begin{array}{l}
1 \circ m = 1 \circ ( 1 + \ldots + 1 ) = a + \ldots + a = a . m \\ \\
n \circ m = ( 1 + \ldots + 1 ) \circ m =
1 \circ m + \ldots + 1 \circ m = a . m + \ldots + a . m = a . n . m
\end{array} $ \\

The cases where $n$ or $m$ is zero or negative now follow from (2.5) and the distributivity of $\circ$ with respect to $''+''$, since distributivity gives $0 \circ m = n \circ 0 = 0$, as well as $( - n ) \circ m = - ( n \circ m ) = n \circ ( - m )$, for $n, m \in \mathbb{Z}$. Thus $n \circ m = a . n . m$ for all $n, m \in \mathbb{Z}$, that is, $( \mathbb{Z}, +, \circ )$ is of the form (2.2).

$\Box$ \\ \\
{\bf Remark 2.1.} \\
1. As noted in section 1, Ring Theory is non-rigid, since the commutative group structure of a ring does not in general determine uniquely the ring multiplication. \\
However, as seen in Theorem 2.1. above, the usual commutative group structure on $\mathbb{Z}$ does determine the commutative multiplication on it, except for a constant factor in (2.1), (2.2). \\
As also seen in Lemma 2.2. above, the usual commutative group structure on $\mathbb{Z}$ does further determine the commutative multiplication on it in case this multiplication has a unit element, except for the possibility of the alternate ring structure. \\
2. The fact that on such a small set like $\mathbb{Z}$, small in the sense of having the smallest infinite cardinal, there are not many significantly different ring structures on its usual commutative group need not come as a surprise. Indeed, in [9,11] it was shown that on $\mathbb{N}$ there are few associative binary operations which satisfy some rather natural and mild conditions. \\
On the other hand, if we consider the commutative group $( \mathbb{M}\,^n ( \mathbb{Q} ), + )$ of $n \times n$ matrices of rational numbers with the usual addition of matrices, then as seen in section 1, two different ring structures can be associated with that group. Yet the set $\mathbb{M}\,^n ( \mathbb{Q} )$ has also the smallest infinite cardinal. It may therefore be the case that in the mentioned result in [9,11] the usual linear order on $\mathbb{Z}$, and thus induced on $\mathbb{N}$ as well, an order missing on $\mathbb{M}\,^n ( \mathbb{Q} )$, plays a role. \\
3. Obviously, the construction in (2.1), (2.2) can be applied to an arbitrary ring, and the result in Lemma 2.1. will still hold. \\
In particular, the {\it alternate} in (2.3) can be defined for arbitrary rings. \\
As for the result in Lemma 2.2., its proof uses the fact that in a unital ring $( R, +, . )$, the equation \\
(2.6)~~~~ $ a . u = 1,~~~~ a, u \in R $ \\
should only have the solutions \\
(2.7)~~~~ $ a = u = \pm 1 $ \\
Thus we have in general \\
{\bf Lemma 2.3.} \\
Let $( R, +, . )$ be a unital ring with property (2.6), (2.7), and let $a \in R$. Then the ring $( R, +, \ast )$ obtained through (2.1), (2.2) is unital, if and only if $a = \pm 1$, thus it reduces to $( R, +, . )$, or to its alternate. \\
\end{document}
\begin{document}
\title{Banach spaces with many boundedly complete basic sequences failing PCP}
\author{Gin\'{e}s L\'{o}pez P\'{e}rez} \address{Universidad de Granada, Facultad de Ciencias. Departamento de An\'{a}lisis Matem\'{a}tico, 18071-Granada (Spain)} \email{[email protected]}
\thanks{Partially supported by MEC (Spain) Grant MTM2006-04837 and Junta de Andaluc\'{\i}a Grants FQM-185 and Proyecto de Excelencia P06-FQM-01438.} \subjclass{46B20, 46B22. Key words: point of continuity property, boundedly complete sequences, supershrinking sequences.} \maketitle \markboth{Gin\'{e}s L\'{o}pez P\'{e}rez}{Boundedly complete sequences and PCP} $$\parbox{3cm}
To\ my\ mother\ Francisca\ and\ my\ sister\ Isabel,\ in\ memoriam $$ \begin{abstract}
We prove that there exist Banach spaces not containing $\ell_1$, failing the point of continuity property and satisfying that every semi-normalized basic sequence has a boundedly complete basic subsequence. This answers in the negative the problem posed in Remark 2 of \cite{R1}.
\end{abstract}
\section{Introduction} \par
Recall that a Banach space is said to have the point of continuity property (PCP) provided every non-empty closed and bounded subset admits a point of continuity of the identity map from the weak to norm topologies. It is known that Banach spaces with the Radon-Nikodym property, including separable dual spaces, satisfy PCP, but the converse is false (see \cite{BR}). The PCP has been characterized for separable Banach spaces in \cite{BR} and \cite{GM}, and this characterization implies that Banach spaces with PCP have many boundedly complete basic sequences, and so many subspaces which are separable dual spaces. As PCP is separably determined \cite{B}, that is, a Banach space satisfies PCP if every separable subspace has PCP, it is natural to look for a sequential characterization of PCP. In this sense, it has been proved in \cite{R1} that every semi-normalized basic sequence in a Banach space with PCP has a boundedly complete subsequence. The converse of the above result is false in general, but it is open for Banach spaces not containing $\ell_1$ (see Remark 2 in \cite{R1}). The goal of this note is to prove in corollary \ref{fin} that there exist Banach spaces failing PCP and not containing $\ell_1$ such that e\-ver\-y semi-normalized basic sequence has a boundedly complete subsequence. Concretely, the space $B_\infty$, the natural predual of the space $JT_\infty$ constructed in \cite{GM}, is the desired example.
We begin with some notation and preliminaries. Let $X$ be a Banach space and let $\{e_n\}$ be a basic sequence in $X$. $\{e_n\}$ is said to be semi-normalized if $0<\inf_n\Vert e_n\Vert\leq \sup_n\Vert e_n\Vert <+\infty$, $X^*$ denotes the topological dual of $X$ and the closed linear span of $\{e_n\}$ is denoted by $[e_n]$. $\{e_n\}$ is called\begin{enumerate} \item[i)] {\it boundedly complete} provided whenever scalars $\{\lambda_i\}$ satisfy\break $\sup_n\Vert\sum_{i=1}^n\lambda_ie_i\Vert<+\infty$, then $\sum_n\lambda_ne_n$ converges. \item[ii)] {\it shrinking} if the scalar sequence $\{\Vert f_{\mid [e_n, e_{n+1}, \ldots]}\Vert\}$ converges to zero $\forall f\in X^*$. \item[iii)] {\it supershrinking} provided $\{e_n\}$ is shrinking and whenever scalars $\{\lambda_i\}$ satisfy $\sup_n\Vert\sum_{i=1}^n\lambda_ie_i\Vert<+\infty$ and $\{\lambda_i\}\to 0$, then $\sum_n\lambda_ne_n$ converges. \item[iv)] {\it strongly summing} provided it is a weakly Cauchy sequence and when\-ever scalars $\{\lambda_i\}$ satisfy $\sup_n\Vert\sum_{i=1}^n\lambda_ie_i\Vert<+\infty$, then $\sum_n\lambda_n$ converges. \end{enumerate}
A boundedly complete basic sequence spans a dual space and a shrinking basic sequence $\{e_n\}$ spans a subspace whose dual has a basis $\{f_n\}$, called the sequence of associated functionals to $\{e_n\}$. A boundedly complete and shrinking basic sequence spans a reflexive subspace and a basic sequence in a reflexive space is both boundedly complete and shrinking \cite {LZ}.
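For orientation, we recall two standard examples (added here; they are classical and not specific to the present construction): the unit vector basis of $c_0$ is shrinking but not boundedly complete, while the unit vector basis of $\ell_1$ is boundedly complete but not shrinking.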
The supershrinking basic sequences appear in \cite{G1} and \cite{G2}, where it is proved that a Banach space $X$ with a supershrinking basis is somewhat order one quasireflexive, whenever $X$ does not contain isomorphic subspaces to $c_0$. Then $X$ has many boundedly complete basic sequences. The space $B_\infty$ has a supershrinking basis (see \cite{G1} and theorem IV.2 in \cite{GM}), does not contain $c_0$ and fails PCP \cite{GM}, so $B_{\infty}$ is a good candidate to be the desired example. Other examples with a supershrinking basis are $c_0$ and $B$, the natural predual of the James tree space $JT$ \cite{GM}. It is worth mentioning that a semi-normalized basis of a Banach space $X$ is supershrinking if and only if \begin{equation}\lbl{igualdad} \{x^{**}\in X^{**}:\lim_nx^{**}(f_n)=0\}=X \end{equation}
where $\{f_n\}$ is the associated functional sequence \cite{G1}.
The strongly summing basic sequences appear in \cite{R2}, where the remarkable $c_0$-theorem is proved, which assures that every non-trivial weak Cauchy sequence in a Banach space not containing $c_0$ has a strongly summing basic subsequence. A weak Cauchy sequence in a Banach space is said to be non-trivial if it does not converge weakly. Finally, we recall that if $\{e_n\}$ is a strongly summing sequence, then $\{v_n\}$ is a basic sequence, where $\{v_n\}$ is the difference sequence of $\{e_n\}$, that is, $v_1=e_1$ and $v_n=e_n-e_{n-1}$ for $n>1$ (\cite{R2}).
There is a very easy connection between supershrinking, strongly summing and boundedly complete basic sequences, which implicitly appears in \cite{R1}. We give it here for the sake of completeness.
\begin{lemma}\lbl{diferencias} Let $\{e_n\}$ be a semi-normalized strongly summing basic sequence with difference sequence $\{v_n\}$. If $\{v_n\}$ is supershrinking, then $\{e_n\}$ is boun\-ded\-ly complete. In fact, $[e_n]$ is order one quasireflexive, that is, $[e_n]$ has codimension $1$ in $[e_n]^{**}$.\end{lemma}
\begin{proof} Let $\{\lambda_n\}$ be scalars so that $\sup_n\Vert\sum_{i=1}^n\lambda_ie_i\Vert<+\infty$. We have to prove that $\sum_n\lambda_ne_n$ converges in order to obtain that $\{e_n\}$ is boundedly complete. As $\{e_n\}$ is strongly summing, $\sum_n\lambda_n$ converges. Define $\mu_n=\sum_{i=n}^{+\infty}\lambda_i$ $\forall n$. Then $\{\mu_n\}$ converges to zero and \begin{equation}\lbl{diferencia} \sum_{i=1}^n\mu_iv_i=\sum_{i=1}^{n-1}\lambda_ie_i+\mu_ne_n\ \forall n\in {\mathbb N} \end{equation} So, $\sup_n\Vert\sum_{i=1}^n\mu_iv_i\Vert<+\infty$ and then $\sum_n\mu_nv_n$ converges, by hypothesis. Finally, $\sum_n\lambda_ne_n$ converges by \ref{diferencia}, since $\{\mu_n\}\to 0$.
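Let us note, for the reader's convenience, that \ref{diferencia} is a routine summation by parts: since $v_1=e_1$ and $v_i=e_i-e_{i-1}$ for $i>1$, we have $\sum_{i=1}^n\mu_iv_i=\sum_{i=1}^{n-1}(\mu_i-\mu_{i+1})e_i+\mu_ne_n$, and $\mu_i-\mu_{i+1}=\lambda_i$ for every $i$.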
Now, we conclude that $[e_n]$ is order one quasireflexive. For this, put $e_n^*=v_n^*-v_{n+1}^*$, where $\{v_n^*\}$ is the associated functional sequence to $\{v_n\}$. Then $\{e_n^*\}$ is the associated functional sequence to $\{e_n\}$. Observe that $[e_n]^*=[v_n^*]$, since $\{v_n\}$ is shrinking. Hence, $[e_n^*]$ has codimension $1$ in $[e_n]^*$, since $x^{**}(e_n^*)=0$ for every $n$ and $x^{**}(v_1^*)=1$, where $x^{**}(x^*)=\lim_n x^*(e_n)$ for every $x^*\in [e_n]^*$ exists because $\{e_n\}$ is weakly Cauchy. In fact, $[e_n]^*=[e_n^*]\oplus [v_1^*]$. But $[e_n^*]^*$ is canonically isomorphic to $[e_n]$, since $\{e_n\}$ is a boundedly complete sequence. Then $[e_n]$ has codimension $1$ in $[e_n]^{**}$.\end{proof}
\section{Main results} \par
Corollary \ref{fin}, announced in the introduction, will be deduced from the following more general result.
\begin{theorem}\lbl{teorema} Let $X$ be a Banach space, not containing $c_0$, with a semi-normalized supershrinking basis. Then every non-trivial weak Cauchy sequence in $X$ has a boundedly complete basic subsequence. \end{theorem}
Before proving this theorem, we need the following stability property of supershrinking bases under taking block basic sequences.
\begin{lemma}\lbl{bloque} Let $X$ be a Banach space with a semi-normalized supershrinking basis $\{e_{n}\}$. If $v_{n}=\sum_{k=\sigma(n-1)+1}^{\sigma(n)}\lambda_{k}e_{k}$ (with $\sigma(0)=0$) is a block basic sequence of $\{e_{n}\}$ with $\{\lambda_{n}\}$ bounded, then $\{v_n\}$ is a supershrinking basic sequence.\end{lemma}
\begin{proof} Let $\{f_n\}$ and $\{g_{n}\}$ be the sequences of functionals associated to $\{e_n\}$ and $\{v_{n}\}$, respectively. Then, restricted to $[v_n]$, $f_{k}=\lambda_{k}g_{n}$ whenever $\sigma(n-1)+1\leq k\leq \sigma(n)$. In order to show that $\{v_n\}$ is a supershrinking basic sequence we check equality (\ref{igualdad}).
Pick $y^{**}\in [v_n]^{**}$ with $\lim_{n}y^{**}(g_{n})=0$. Then $\lim_{n}y^{**}(f_{n})=0$, since $\{\lambda_{n}\}$ is bounded. So, $y^{**}\in X$ by (\ref{igualdad}) applied to $\{e_n\}$, and therefore $\{v_n\}$ is supershrinking.\end{proof}
Now, we show that Banach spaces with a supershrinking basis without copies of $c_0$ contain many reflexive subspaces.
\begin{proposition}\lbl{elton} Let $X$ be a Banach space with a semi-normalized supershrinking basis $\{e_{n}\}$, containing no subspace isomorphic to $c_{0}$. Then every subsequence of $\{e_{n}\}$ has a further subsequence whose closed linear span is a reflexive subspace. \end{proposition}
\begin{proof} It is clear that it is enough to prove that $\{e_{n}\}$ has a subsequence whose closed linear span is a reflexive subspace.
For this, we apply the Elton theorem \cite{D} to obtain a basic subsequence $\{e_{\sigma(n)}\}$ of $\{e_{n}\}$ such that $$\lim_{k}\Vert \sum_{i=1}^{k}a_{i}e_{\sigma(i)}\Vert =+\infty\ \forall \{a_{i}\}\notin c_{0}.$$ We put $Y=[e_{\sigma(n)}]$. To see that $Y$ is reflexive, it suffices to prove that $\{e_{\sigma(n)}\}$ is a boundedly complete basic sequence, since $\{e_{\sigma(n)}\}$ is already a shrinking basic sequence.
Let $\{\lambda_{n}\}\subset {\mathbb R}$ be such that $\sup_{n}\Vert \sum_{k=1}^{n}\lambda_{k}e_{\sigma(k)}\Vert <+\infty$. Then $\{\lambda_{n}\}\in c_{0}$, by the choice of $\{e_{\sigma(n)}\}$, and so $\sum_{n}\lambda_{n}e_{\sigma(n)}$ converges, since $\{e_{\sigma(n)}\}$ is supershrinking. Hence $\{e_{\sigma(n)}\}$ is boundedly complete and $Y$ is reflexive.\end{proof}
\begin{proof} {\it of Theorem \ref{teorema}}. Let $\{f_n\}$ be the sequence of functionals associated to $\{e_n\}$ and assume, without loss of generality, that $\{e_n\}$ is monotone, that is, $\Vert Q_n\Vert\leq 1$ $\forall n\in {\mathbb N}$, where $Q_n$, given by $Q_nx=\sum_{k=1}^nf_k(x)e_k$, are the basis projections of $\{e_n\}$. Put $M=\sup_n\Vert e_n\Vert$ and let $\{x_n\}$ be a non-trivial weak Cauchy sequence in $X$. By the $c_0$-theorem, we can pass to a strongly summing basic subsequence of $\{x_n\}$, so we in fact assume that $\{x_n\}$ itself is a non-trivial weak Cauchy strongly summing basic sequence.
We claim that there exist integers $0<\sigma(1)<\sigma(2)<\ldots$, $0=m_0<1=m_1<m_2<\ldots$ and a basic sequence $\{v_n\}$ such that\begin{enumerate} \item[i)] \begin{equation} \vert f_h(x_{\sigma(n)})-f_h(x_k)\vert<\frac{1}{2^{n+3}m_{n}M}\ \forall k\geq \sigma(n),\ h\leq m_{n},\ n\in {\mathbb N} \end{equation} \item[ii)] $v_n\in [e_k:m_{n-1}+1\leq k\leq m_{n+1}]\ \forall n\in {\mathbb N}$ \item[iii)] $\Vert v_n-z_n\Vert<1/2^{n+1}\ \forall n\in {\mathbb N}$,\end{enumerate} where $\{z_n\}$ is the difference sequence of $\{x_{\sigma(n)}\}$, that is, $z_1=x_{\sigma(1)}$ and $z_n=x_{\sigma(n)}-x_{\sigma(n-1)}$ for all $n>1$.
As $\{x_n\}$ is weakly Cauchy, there is $\sigma(1)\in {\mathbb N}$ such that \begin{equation}\lbl{1}\vert f_1(x_{\sigma(1)})-f_1(x_k)\vert <\frac{1}{2^4M}\ \forall k\geq \sigma(1).\end{equation}
Choose $m_2>m_1$ such that $\Vert \sum_{n=m_2+1}^{+\infty}f_n(x_{\sigma(1)})e_n\Vert<1/2^2$ and put\break $v_1=\sum_{n=1}^{m_2}f_n(x_{\sigma(1)})e_n$. Then $\Vert z_1-v_1\Vert=\Vert \sum_{n=m_2+1}^{+\infty}f_n(x_{\sigma(1)})e_n\Vert<1/2^2$.
Pick now $\sigma(2)>\sigma(1)$ such that \begin{equation}\lbl{2} \vert f_h(x_{\sigma(2)})-f_h(x_k)\vert<\frac{1}{2^5m_2M}\ \forall k\geq\sigma(2),\ h\leq m_2 \end{equation} Choose $m_3>m_2$ such that $\Vert \sum_{n=m_3+1}^{+\infty}(f_n(x_{\sigma(2)})-f_n(x_{\sigma(1)}))e_n\Vert<1/2^4$.
Put now $v_2=\sum_{n=m_1+1}^{m_3}(f_n(x_{\sigma(2)})-f_n(x_{\sigma(1)}))e_n$. Then $\Vert z_2-v_2\Vert\leq \Vert (f_1(x_{\sigma(2)})-f_1(x_{\sigma(1)}))e_1\Vert +\Vert\sum_{n=m_3+1}^{+\infty}(f_n(x_{\sigma(2)})-f_n(x_{\sigma(1)}))e_n\Vert<1/2^4+1/2^4=1/2^3$, by \ref{1} and \ref{2}.
Assume, inductively, that $m_2<m_3<\ldots<m_{n+1}$, $\sigma(2)<\sigma(3)<\ldots<\sigma(n)$ and $v_1,v_2,\ldots,v_n$ have been constructed such that \begin{equation}\lbl{3} \vert f_h(x_{\sigma(n)})-f_h(x_k)\vert<\frac{1}{2^{n+3}m_{n}M}\ \forall k\geq \sigma(n),\ h\leq m_{n} \end{equation} Pick now $m_{n+2}>m_{n+1}$ such that \begin{equation}\lbl{4} \Vert\sum_{i=m_{n+2}+1}^{+\infty}(f_i(x_{\sigma(n+1)})-f_i(x_{\sigma(n)}))e_i\Vert<1/2^{n+3}. \end{equation} Put $v_{n+1}=\sum_{i=m_n+1}^{m_{n+2}}(f_i(x_{\sigma(n+1)})-f_i(x_{\sigma(n)}))e_i$. Then $\Vert z_{n+1}-v_{n+1}\Vert\leq \Vert\sum_{i=1}^{m_n}(f_i(x_{\sigma(n+1)})-f_i(x_{\sigma(n)}))e_i\Vert+ \Vert\sum_{i=m_{n+2}+1}^{+\infty}(f_i(x_{\sigma(n+1)})-f_i(x_{\sigma(n)}))e_i\Vert<1/2^{n+3}+1/2^{n+3}=1/2^{n+2}$, by \ref{3} and \ref{4}.
Now, choose $\sigma(n+1)>\sigma(n)$ such that \begin{equation} \vert f_h(x_{\sigma(n+1)})-f_h(x_k)\vert<\frac{1}{2^{n+4}m_{n+1}M}\ \forall k\geq \sigma(n+1),\ h\leq m_{n+1}. \end{equation} Then the induction is complete and the claim is proved.
From the claim, it is clear that $\{v_n\}$ is a basic sequence equivalent to $\{z_n\}$, the difference sequence of $\{x_{\sigma(n)}\}$, since $\sum_{n=1}^{+\infty}\Vert z_n-v_n\Vert<1/2$ (see proposition 1.a.9 in \cite{LZ}). Also, we obtain that $[v_n,v_{n+1},\ldots]\subset[e_{m_{n-1}+1},e_{m_{n-1}+2},\ldots]$ $\forall n\in {\mathbb N}$, so $\{v_n\}$ is a shrinking basic sequence, since $\{e_n\}$ is.
Now, let us see that $\{v_n\}$ is a supershrinking basic sequence. For this, we choose a scalar sequence $\{\lambda_n\}$ with $\{\lambda_n\}\to 0$ and $\sup_n\Vert\sum_{k=1}^n\lambda_kv_k\Vert<+\infty$, and we have to prove that $\sum_n\lambda_nv_n$ converges.
From the proof of the claim $v_1=\sum_{n=1}^{m_2}f_n(x_{\sigma(1)})e_n$, and for every $n>1$ $v_n=\sum_{k=m_{n-1}+1}^{m_{n+1}}(f_k(x_{\sigma(n)})-f_k(x_{\sigma(n-1)}))e_k$.
Put $\mu_i=\lambda_1f_i(x_{\sigma(1)})$ for $1\leq i\leq m_1$, $\mu_i=\lambda_1f_i(x_{\sigma(1)})+\lambda_2(f_i(x_{\sigma(2)})-f_i(x_{\sigma(1)}))$ for $m_1+1\leq i\leq m_2$ and $\mu_i=\lambda_{k-1}(f_i(x_{\sigma(k-1)})-f_i(x_{\sigma(k-2)}))+\lambda_k(f_i(x_{\sigma(k)})-f_i(x_{\sigma(k-1)}))$ for $m_{k-1}+1\leq i\leq m_k$ and $k>2$.
As $\{\lambda_n\}\to 0$, $\{e_n\}$ is a semi-normalized basis of $X$ and $\{x_n\}$ is bounded, we deduce that $\{\mu_n\}\to 0$. Furthermore, we have the following equality for all $n\in {\mathbb N}$: \begin{equation} \sum_{k=1}^{n}\lambda_kv_k=\sum_{k=1}^{m_n}\mu_ke_k+\sum_{k=m_n+1}^{m_{n+1}}\lambda_n(f_k(x_{\sigma(n)})-f_k(x_{\sigma(n-1)}))e_k \end{equation} Hence, whenever $m_n+1\leq p<m_{n+1},\ n>1$, we have \begin{equation}\lbl{chorizo}
\sum_{k=1}^p\mu_ke_k=\sum_{k=1}^{n}\lambda_kv_k+\sum_{k=m_n+1}^{p}\lambda_{n+1}(f_k(x_{\sigma(n+1)})-f_k(x_{\sigma(n)}))e_k \end{equation} $$-\sum_{k=p+1}^{m_{n+1}}\lambda_{n}(f_k(x_{\sigma(n)})-f_k(x_{\sigma(n-1)}))e_k$$
Now, as $\{x_n\}$ and $\{Q_n\}$ are bounded and $\{\lambda_n\}\to 0$, we obtain that \begin{equation}\lbl{cero} \lim_n\sum_{k=p+1}^{m_{n+1}}\lambda_{n}(f_k(x_{\sigma(n)})-f_k(x_{\sigma(n-1)}))e_k= \end{equation} $$\lim_n\sum_{k=m_n+1}^{p}\lambda_{n+1}(f_k(x_{\sigma(n+1)})-f_k(x_{\sigma(n)}))e_k=0,$$ since for every $m_n+1\leq p<m_{n+1}$, $n\in {\mathbb N},\ n>1$ we have: \begin{equation}\lbl{proj}\sum_{k=p+1}^{m_{n+1}}\lambda_{n}(f_k(x_{\sigma(n)})-f_k(x_{\sigma(n-1)}))e_k =\lambda_n(Q_{m_{n+1}}-Q_p)(x_{\sigma(n)}-x_{\sigma(n-1)}),\end{equation} $$\sum_{k=m_n+1}^{p}\lambda_{n+1}(f_k(x_{\sigma(n+1)})-f_k(x_{\sigma(n)}))e_k= \lambda_{n+1}(Q_p-Q_{m_n})(x_{\sigma(n+1)}-x_{\sigma(n)})$$ From \ref{chorizo} and \ref{proj}, it can be deduced that $\sup_p\Vert\sum_{n=1}^p\mu_ne_n\Vert<+\infty$ and so $\sum_n\mu_ne_n$ converges, since $\{\mu_n\}\to 0$ and $\{e_n\}$ is supershrinking. Then $\sum_n\lambda_nv_n$ converges by \ref{chorizo} and \ref{cero} and we have proved that $\{v_n\}$ is a supershrinking basic sequence equivalent to the difference sequence of $\{x_{\sigma(n)}\}$. Finally, $\{x_{\sigma(n)}\}$ is boundedly complete by lemma \ref{diferencias}, since it is strongly summing. In fact, $[x_{\sigma(n)}]$ is order one quasireflexive, by lemma \ref{diferencias}.\end{proof}
\begin{corollary}\lbl{fin} Let $X$ be a Banach space with a semi-normalized supershrinking basis not containing $c_0$. Then every semi-normalized basic sequence in $X$ has a boundedly complete subsequence spanning a reflexive or an order one quasireflexive subspace of $X$. \end{corollary}
\begin{proof} Let $\{x_n\}$ be a semi-normalized basic sequence in $X$. As $X$ has a shrinking basis, $X^*$ is separable and so $X$ contains no subspace isomorphic to $\ell_1$; hence, by the $\ell_1$-theorem \cite{R3}, we can assume that $\{x_n\}$ itself is weakly Cauchy. If $\{x_n\}$ is not weakly convergent, then $\{x_n\}$ is a semi-normalized non-trivial weak Cauchy sequence and $\{x_n\}$ has a boundedly complete subsequence spanning an order one quasireflexive subspace, by theorem \ref{teorema}, and we are done.
If $\{x_n\}$ is weakly convergent, then $\{x_n\}$ converges weakly to zero, because $\{x_n\}$ is a basic sequence. Now, it is straightforward to construct a subsequence of $\{x_n\}$ equivalent to a block basic sequence of the basis. So, we can assume that $\{x_n\}$ is a semi-normalized basic sequence equivalent to a block basic sequence of the basis. Following the proof of proposition 1.a.11 in \cite{LZ}, it is easy to construct this block basic sequence so that it satisfies the hypothesis of lemma \ref{bloque}. Then $\{x_n\}$ is a supershrinking basic sequence and, by proposition \ref{elton}, $\{x_n\}$ has a boundedly complete subsequence spanning a reflexive subspace, so we are done.\end{proof}
As we announced in the introduction, it is enough to apply Corollary \ref{fin} to obtain the following.
\begin{corollary}\lbl{qed} $B_{\infty}$ fails PCP, contains no subspace isomorphic to $\ell_1$, and every semi-normalized basic sequence in $B_{\infty}$ has a boundedly complete subsequence spanning a reflexive or an order one quasireflexive subspace. \end{corollary}
\begin{proof} The fact that $B_{\infty}$ has a semi-normalized supershrinking basis is a consequence of theorem IV.2 in \cite{GM}. So $B_{\infty}$ has a separable dual and contains no subspace isomorphic to $\ell_1$. Moreover, $B_{\infty}$ fails PCP and contains no subspace isomorphic to $c_0$ \cite{GM}. Finally, by corollary \ref{fin}, every semi-normalized basic sequence in $B_{\infty}$ has a boundedly complete subsequence spanning a reflexive or an order one quasireflexive subspace of $B_{\infty}$.\end{proof}
Let $B$ be the natural predual of the James tree space $JT$. It is known that $B$ satisfies PCP and that $B$ has a semi-normalized supershrinking basis (see \cite{LS} and \cite{G2}). As $B$ contains no subspace isomorphic to $c_0$ \cite{LS}, we can apply corollary \ref{fin}, as in corollary \ref{qed}, to obtain the following.
\begin{corollary} Every semi-normalized basic sequence in $B$ has a boundedly complete subsequence spanning a reflexive or an order one quasireflexive subspace of $B$.\end{corollary}
\begin{remark}i) It has been proved in \cite{DF} that a Banach space $X$ with separable dual satisfies PCP if, and only if, every weakly null tree in the unit sphere of $X$ has a boundedly complete branch, which can be easily deduced from \cite{G2}. Also, it is shown in \cite{DF} that this characterization of PCP is not true for sequences, by proving that every weakly null sequence in the unit sphere of $B_{\infty}$ has a boundedly complete subsequence, while $B_{\infty}$ fails PCP. Hence corollary \ref{qed} improves this result, since every weakly null sequence in the unit sphere of a Banach space has a semi-normalized basic subsequence.
ii) From corollary \ref{fin} one might think that the right sequential property for implying PCP in Banach spaces with separable dual is that every semi-normalized basic sequence has a subsequence spanning a reflexive subspace. This is true, but the property in fact implies reflexivity. Indeed, assume that $X$ is a Banach space such that every semi-normalized basic sequence has a subsequence spanning a reflexive subspace. Take a bounded sequence $\{x_n\}$ in $X$; we prove that $\{x_n\}$ has a weakly convergent subsequence. As $X$ contains no subspace isomorphic to $\ell_1$, $\{x_n\}$ has a weak Cauchy subsequence $\{y_n\}$, by the $\ell_1$-theorem. If $\{y_n\}$ is not semi-normalized, then $\{y_n\}$, and so $\{x_n\}$, has a subsequence weakly convergent to zero and we are done. Hence, assume that $\{y_n\}$ is a semi-normalized weak Cauchy sequence in $X$. If $\{y_n\}$ is not weakly convergent, then, by the $c_0$-theorem for example, $\{y_n\}$ has a semi-normalized basic subsequence, since $X$ contains no subspace isomorphic to $c_0$. By hypothesis, $\{y_n\}$ has a further subsequence spanning a reflexive subspace and hence this subsequence is weakly convergent to zero, so $\{x_n\}$ has a weakly convergent subsequence and we are done.
iii) It is known that $B_{\infty}$ satisfies the convex point of continuity property (CPCP) \cite{GM}, a weaker property than PCP. So it is natural to ask whether a Banach space satisfies CPCP whenever every semi-normalized basic sequence has a boundedly complete subsequence.\end{remark}
\end{document}
Heterologous expression of genes for bioconversion of xylose to xylonic acid in Corynebacterium glutamicum and optimization of the bioprocess
M. S. Lekshmi Sundar1,2,
Aliyath Susmitha1,2,
Devi Rajan1,
Silvin Hannibal3,
Keerthi Sasikumar1,2,
Volker F. Wendisch3 &
K. Madhavan Nampoothiri1,2
In bacterial systems, the direct conversion of xylose to xylonic acid is mediated by the NAD-dependent xylose dehydrogenase (xylB) and xylonolactonase (xylC) genes. Heterologous expression of these genes from Caulobacter crescentus in recombinant Corynebacterium glutamicum ATCC 13032 and C. glutamicum ATCC 31831 (which carries an innate pentose transporter gene, araE) resulted in an efficient bioconversion process to produce xylonic acid from xylose. Process parameters, including the design of the production medium, were optimized using a statistical tool, Response Surface Methodology (RSM). A maximum xylonic acid titer of 56.32 g/L from 60 g/L xylose, i.e. about 76.67% of the maximum theoretical yield, was obtained after 120 h of fermentation on pure xylose with recombinant C. glutamicum ATCC 31831 containing the plasmid pVWEx1-xylB. Under the same conditions, the production with recombinant C. glutamicum ATCC 13032 (with pVWEx1-xylB) was 50.66 g/L, i.e. 69% of the theoretical yield. There was no significant improvement in production with the simultaneous expression of the xylB and xylC genes together, indicating that xylose dehydrogenase activity is one of the rate-limiting factors in the bioconversion. Finally, a proof-of-concept experiment utilizing biomass-derived pentose sugar (xylose) for xylonic acid production was also carried out, yielding 42.94 g/L xylonic acid from 60 g/L xylose. These results promise a significant value addition for future biorefinery programs.
Construction of C. glutamicum recombinants with genes for xylose-to-xylonic acid conversion.
Bioprocess development using C. glutamicum for xylonic acid.
Conversion of biomass derived xylose to xylonic acid.
D-xylonic acid, an oxidation product of xylose, is a versatile platform chemical with multifaceted applications in the fields of food, pharmaceuticals and agriculture. It is considered by the U.S. Department of Energy to be one of the 30 chemicals of highest value because it can be used in a variety of applications, including as a dispersant, pH regulator, chelator, antibiotic clarifying agent and health enhancer (Byong-Wa et al. 2006; Toivari et al. 2012). Xylonic acid may also be used as a precursor for bio-plastics, polymer synthesis and other chemicals such as 1,2,4-butanetriol (Niu Wei et al. 2003). Although xylonic acid production is feasible via chemical oxidation using platinum or gold catalysts, the selectivity is relatively poor (Yim et al. 2017). As pentose sugar catabolism is limited in the majority of industrial microbes (Wisselink et al. 2009), microbial conversion of xylose to xylonic acid has gained interest. To date, biogenic production of xylonic acid has been accomplished in various microorganisms, including Escherichia coli, Saccharomyces cerevisiae and Kluyveromyces lactis, by introducing the xylB (encoding xylose dehydrogenase) and xylC (encoding xylonolactonase) genes from Caulobacter crescentus or Trichoderma reesei (Nygård et al. 2011; Toivari et al. 2012; Cao et al. 2013).
As xylose is the monomeric sugar required for xylonic acid production, much interest has been paid to utilizing xylose generated from lignocellulosic biomass (Lin et al. 2012). Bio-transformation of lignocellulosic biomass into platform chemicals is possible only through its conversion to monomeric sugars, mostly by pretreatment, i.e. pre-hydrolysis with alkali or acid at elevated temperature, or via enzymatic hydrolysis. Monomeric hexose and pentose sugars are generated from lignocellulosic biomass along with inhibitory by-products like furfural, 5-hydroxymethylfurfural and 4-hydroxybenzaldehyde that affect the performance of microbial production hosts (Matano et al. 2014). The biomass refinery concept is attracting increasing attention owing to the cost-effectiveness of the 2G ethanol program. Microbial production of value-added products such as biopolymers, bioethanol, butanol, organic acids and xylitol has been reported from the C5 stream generated by the pretreatment of biomass, using different microbes like Pichia stipitis, Clostridium acetobutylicum, Candida guilliermondii and Bacillus coagulans (Mussatto and Teixeira 2010; Ou et al. 2011; de Arruda et al. 2011; Lin et al. 2012; Raganati et al. 2015).
Although some industrial strains are capable of pentose fermentation, most of them are sensitive to the inhibitors generated during lignocellulosic biomass pretreatment. However, Corynebacterium glutamicum showed remarkable resistance towards these inhibitory by-products under growth-arrested conditions (Sakai et al. 2007). C. glutamicum is a Gram-positive, aerobic, rod-shaped, non-spore-forming soil actinomycete which exhibits numerous ideal intrinsic attributes as a microbial factory to produce amino acids and high-value chemicals (Heider and Wendisch 2015; Hirasawa and Shimizu 2016; Yim et al. 2017). This bacterium has been successfully engineered to produce a broad range of products, including diamines, amino carboxylic acids, diacids, recombinant proteins and even industrial enzymes (Becker et al. 2018; Baritugo et al. 2018). Many metabolic engineering efforts have been reported in C. glutamicum for the production of chemicals like amino acids, sugar acids, xylitol and biopolymers from hemicellulosic biomass such as wheat bran, rice straw and sorghum stover (Gopinath et al. 2011; Wendisch et al. 2016; Dhar et al. 2016).
Since C. glutamicum lacks the genes for the metabolic conversion of xylose to xylonic acid, heterologous expression of the xylose dehydrogenase (xylB) and xylonolactonase (xylC) genes from Caulobacter crescentus was attempted. In addition to the ATCC 13032 wild type, we also explored C. glutamicum ATCC 31831, which contains a pentose transporter gene (araE) that enables the uptake of pentose sugars (Kawaguchi et al. 2009; Choi et al. 2019). The xylB and xylC genes, individually as well as together as xylBC, were amplified from the xylose operon of C. crescentus, the resulting plasmids were transformed into both C. glutamicum strains, and xylonic acid production was examined.
Microbial strains and culture conditions
Microbial strains and plasmids used in this study are listed in Table 1. For genetic manipulations, E. coli strains were grown at 37 °C in Luria–Bertani (LB) medium. C. glutamicum strains were grown at 30 °C in Brain Heart Infusion (BHI) medium. Where appropriate, media were supplemented with antibiotics; the final kanamycin concentration for both E. coli and C. glutamicum was 25 μg/mL. Culture growth was measured spectrophotometrically at 600 nm using a UV–VIS spectrophotometer (UVA-6150, Shimadzu, Japan).
Table 1 Microbial strains, plasmids and primers used in the study
Molecular techniques and strain construction
Standard molecular techniques were done according to the protocol described by (Sambrook et al. 2006). Genomic DNA isolation was done with Gen Elute genomic DNA isolation kit (Sigma, India). Plasmid isolation was done using Qiagen plasmid midi kit (Qiagen, Germany). Polymerase chain reaction (PCR) was performed using automated PCR System (My Cycler, Eppendorff, Germany) in a total volume of 50 μl with 50 ng of DNA, 0.2 mM dNTP in PrimeSTAR™ buffer (Takara), and 1.25 U of PrimeSTAR™ HS DNA polymerase (Takara) and the PCR product was purified by QIA quick PCR purification kit (Qiagen, Germany) as per the instructions provided by the manufacturers. Competent E. coli DH5α cells were prepared by Transformation and Storage Solution (TSS) method and transformed by heat shock (Chung and Miller 1993). The C. glutamicum competent cells were electroporated to achieve the transformation (van der Rest et al. 1999).
The xylose dehydrogenase (xylB) and xylonolactonase (xylC) genes of Caulobacter crescentus, individually and together (xylBC), were amplified from the xylose-inducible xylXABCD operon (CC0823–CC0819) (Stephens et al. 2007) by polymerase chain reaction (PCR) with appropriate primers as shown in Table 1. The purified PCR products (747 bp xylB, 870 bp xylC and 1811 bp xylBC) were verified by sequencing and cloned into the restriction sites (BamHI/PstI) of the pVWEx1 shuttle vector. The resulting plasmids, pVWEx1-xylB, pVWEx1-xylC and pVWEx1-xylBC, were transformed into E. coli DH5α and the transformants bearing pVWEx1 derivatives were screened in LB medium supplemented with kanamycin (25 µg mL−1). Competent cells of C. glutamicum ATCC 13032 and ATCC 31831 were prepared and the plasmids were electroporated into both C. glutamicum strains with parameters set at 25 μF, 600 Ω and 2.5 kV, yielding a pulse duration of 10 ms, and the positive clones were selected on LBHIS kanamycin (25 µg mL−1) plates (van der Rest et al. 1999).
Fermentative production of xylonic acid by C. glutamicum transformants
For xylonic acid production, C. glutamicum was inoculated in 10 ml of liquid medium (BHI broth) in a test tube and grown overnight at 30 °C under aerobic conditions with shaking at 200 rpm. An aliquot of the 10 ml culture was used to inoculate 100 ml CGXII production medium (Keilhauer et al. 1993) containing 35 g/L xylose and 5 g/L glucose as carbon sources and kanamycin (25 µg mL−1). IPTG (1 mM) induction was done along with the inoculation. Fermentation was carried out in 250 mL Erlenmeyer flasks containing 100 mL production medium and incubated as described above. Samples were withdrawn at regular intervals to determine sugar consumption and xylonic acid production. Since the xylB transformant was found to be the best producer, it was also compared with C. glutamicum ATCC 13032 carrying the xylB gene to see whether the innate araE pentose transporter in ATCC 31831 confers any advantage over the wild type ATCC 13032.
Media engineering by response surface methodology (RSM)
Response surface methodology was applied to identify the operating variables that have a significant effect on xylonic acid production. A Box–Behnken experimental design (BBD) (Box and Behnken 1960) with four independent variables that may affect xylonic acid production (selected based on a single-parameter study, data not shown), namely (NH4)2SO4 (2.5–12.5 g/L), urea (4.5–18.5 g/L), xylose (30–90 g/L) and inoculum (7.5–1.125%), was studied at three levels, −1, 0 and +1, corresponding to low, medium and high values, respectively. Responses were measured as the titer (g/L) of xylonic acid. The statistical and numerical analysis of the model was evaluated by analysis of variance (ANOVA), which included the p-value, regression coefficients, effect values and F value, using Minitab 17 software. Studies were performed using C. glutamicum ATCC 31831 harboring pVWEx1-xylB.
Dilute acid pretreatment of the biomass
The rice straw was crushed into fine particles (size of 10 mm), pre-soaked in dilute acid (H2SO4) for 30 min, and pretreated at 15% (w/w) biomass loading and 1% (w/w) acid concentration at 121 °C for 1 h. After cooling, the mixture was neutralized to pH 6–7 using 10 N NaOH. The liquid portion, i.e. the acid-pretreated liquor (APL) rich in pentose sugar (xylose), was separated from the pretreated slurry and lyophilized to concentrate it to the desired xylose level, which was estimated prior to the shake flask fermentation studies.
Quantification of sugars and xylonic acid in fermentation broth
The qualitative and quantitative analysis of sugars and sugar acid (xylonic acid) was performed using an automated high-performance liquid chromatography (HPLC) system (Prominence UFLC, Shimadzu, Japan) equipped with auto-sampler, column oven and RI Detector. The monomeric sugars (xylose and glucose) were resolved with Phenomenex Rezex RPM Pb+ cation exchange monosaccharide column (300 × 7.5 mm) operated at 80 °C. MilliQ water (Millipore) with a flow rate of 0.6 mL/min was used as the mobile phase. For xylonic acid detection, Phenomenex organic acid column (250 mm × 4.6 mm × 5 µm) operated at 55 °C was used with a mobile phase of 0.01 N H2SO4 at a flow rate of 0.6 mL/min. The samples were centrifuged (13,000 rpm for 10 min at 4 °C) and filtered using 0.2 µm filters (Pall Corporation, Port Washington, New York) for analysis.
Xylose utilization and xylonic acid production by C. glutamicum transformants
Corynebacterium glutamicum recombinants expressing xylB, xylC and xylBC were constructed. The xylose dehydrogenase and xylonolactonase genes were cloned into the IPTG-inducible expression vector pVWEx1 and transformed into C. glutamicum ATCC 31831. To check xylonic acid production from xylose, the C. glutamicum ATCC 31831 transformants harboring pVWEx1-xylB, pVWEx1-xylC and pVWEx1-xylBC were cultivated in CGXII medium containing 5 g/L of glucose as the carbon source for initial cell growth and 35 g/L of xylose as the substrate for xylonic acid production. Cell growth, xylose consumption and xylonic acid production were analyzed at regular intervals during the incubation. The analysis showed that, compared to the control strain with the empty vector (Fig. 1a) and to the other transformants, the transformant harboring pVWEx1-xylB picked up growth very fast, utilized xylose effectively (77.2% utilization after 120 h) and gave a maximum production of 32.5 g/L xylonic acid (Fig. 1b). The pVWEx1-xylBC harboring strain produced 26 g/L xylonic acid (Fig. 1d), whereas pVWEx1-xylC showed neither any significant xylose uptake nor xylonic acid production (Fig. 1c).
Xylose consumption (35 g/L) (closed triangle), xylonic acid production (closed circle) and growth curve (open circle) of C. glutamicum ATCC 31831 harboring a pVWEx1, b pVWEx1-xylB, c pVWEx1-xylC and d pVWEx1-xylBC, respectively
Box–Behnken experimental design (BBD) and operational parameter optimization
The objective of the experimental design was medium engineering for maximum xylonic acid production. There were a total of 15 runs for optimizing the four individual parameters in the current BBD. Experimental design and xylonic acid yield are presented in Table 2. The polynomial equation obtained for the model was as below:
Xylonic acid (g/L) = −48.7 − 0.45 X1 + 3.5 X2 + 0.220 X3 + 2.058 X4 − 0.019 X1² − 0.2139 X2² − 0.0423 X3² − 0.01943 X4² − 0.075 X1X2 + 0.0416 X1X3 − 0.0119 X1X4 + 0.526 X2X3 + 0.0482 X2X4 − 0.00128 X3X4
where X1, X2, X3 and X4 are the xylose, (NH4)2SO4, urea and inoculum concentrations, respectively. Maximum production efficiency (0.47 g L−1 h−1) was observed with Run No. 13, where the parameter settings were urea 11.5 g/L, xylose 60 g/L, (NH4)2SO4 7.5 g/L and inoculum 1.125%, and the xylonic acid titer was 56.32 g/L. This indicates that (NH4)2SO4, inoculum concentration and xylose have a more significant positive effect than urea on xylonic acid yield.
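To illustrate how such a second-order model can be obtained, the short Python sketch below fits a full quadratic response surface (intercept, linear, squared and two-factor interaction terms) to Box–Behnken-style run data by ordinary least squares. This is only a minimal illustration: the original analysis was performed in Minitab 17, the function name quadratic_design_matrix is ours, and the factor settings and titers in the sketch are placeholders, not the experimental values of Table 2 (the inoculum range used here is a guess, since the stated range is ambiguous).

```python
# Minimal sketch (not from the original study, which used Minitab 17): fitting a
# second-order response surface model by ordinary least squares. The run data below
# are placeholders, not the actual settings and titers reported in Table 2.
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Expand an (n_runs x k) factor matrix into the full second-order model matrix:
    intercept, linear, squared and pairwise interaction columns (15 columns for k = 4)."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(k), 2)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
# Placeholder runs: 20 settings of (xylose g/L, (NH4)2SO4 g/L, urea g/L, inoculum %)
# with made-up xylonic acid titers (g/L); substitute the real run data from Table 2.
X = np.column_stack([rng.uniform(30, 90, 20),
                     rng.uniform(2.5, 12.5, 20),
                     rng.uniform(4.5, 18.5, 20),
                     rng.uniform(0.75, 1.125, 20)])
y = rng.uniform(20, 60, 20)

M = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(M, y, rcond=None)   # least-squares coefficient estimates
y_hat = M @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("Fitted coefficients:", np.round(coef, 3))
print("R^2 of the second-order model:", round(r2, 3))
```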
Table 2 Box–Behnken experimental design matrix with experimental values of xylonic acid production by Corynebacterium glutamicum ATCC 31831
Response surface curves were plotted to find out the interaction of variables and to determine the optimum level of each variable for maximum response. The contour plot showing the interaction between a pair of factors on xylonic acid yield is given in Fig. 2a–f. Major interactions studied are of inoculum and xylose concentration (a), xylose and urea concentration (b), (NH4)2SO4 and urea concentration (c), effect of inoculum and (NH4)2SO4 concentration (d), effect of (NH4)2SO4 and xylose concentration (e) and the interaction of inoculum and urea concentration (f).
Response surface methodology-contour plots showing the effect of various parameters on xylonic acid production by C.glutamicum ATCC 31831. a Effect of inoculum and xylose. b Effect of xylose and urea. c Effect of (NH4)2SO4 and urea. d Effect of inoculum and (NH4)2SO4. e Effect of (NH4)2SO4 and xylose f Effect of inoculum and urea
The ANOVA of response for xylonic acid is shown in Table 3. The R2 value explains the variability in the xylonic acid yield associated with the experimental factors to the extent of 97.48%.
Table 3 Analysis of variance for xylonic acid production using C. glutamicum ATCC 31831
Role of araE pentose transporter for enhanced uptake of xylose and xylonic acid production
Using the designed medium standardized for C. glutamicum ATCC 31831, which possesses an arabinose and xylose transporter encoded by araE, a comparative production study was carried out with recombinant C. glutamicum ATCC 13032. Both strains grew well in the CGXII production medium and metabolized xylose to xylonic acid. After 120 h of fermentation, the recombinant strain ATCC 13032 produced 50.66 g/L of xylonic acid whereas ATCC 31831 produced 56.32 g/L (Fig. 3). Better uptake of the pentose sugar was also exhibited by C. glutamicum ATCC 31831, i.e., 75% consumption compared to 60% by ATCC 13032 after 120 h of fermentation, and the same was the case for culture growth, where ATCC 31831 grew better (culture broth was diluted 10× for the spectrophotometric readings; Additional file 1: Figure S1).
Xylonic acid production by C. glutamicum ATCC 13032 (open bar) and C. glutamicum ATCC 31831 (closed bar) harbouring plasmid pVWEx1-xylB
Xylonic acid from rice straw hydrolysate
Fermentation was carried out in rice straw hydrolysate using C. glutamicum ATCC 31831 (pVWEx1-xylB). The strain could grow at different xylose concentrations (20, 40 and 60 g/L) in rice straw hydrolysate, and after 120 h of fermentation the maximum titer obtained was 42.94 g/L xylonic acid from 60 g/L xylose (Fig. 4). A production yield of 58.48% xylonic acid in hydrolysate is remarkable for sugar acid production with an engineered strain of C. glutamicum, which is quite tolerant to the inhibitors present in the hydrolysate.
Xylose utilization (open symbols) and xylonic acid production (closed symbols) by C. glutamicum ATCC 31831 (pVWEx1-xylB) in rice straw hydrolysate containing different concentrations of xylose 20 g/L (open diamond), 40 g/L (open square) and 60 g/L (open circle). Xylonic acid production from 20 g/L xylose (closed diamond), 40 g/L xylose (closed square) and 60 g/L xylose (closed circle)
Heterologous expression of genes for the production of various value-added chemicals has been successfully carried out in C. glutamicum, for example for the production of amino acids, sugar alcohols, organic acids, diamines, glycolate and 1,5-diaminopentane (Buschke et al. 2013; Meiswinkel et al. 2013; Zahoor et al. 2014; Pérez-García et al. 2016; Dhar et al. 2016). The versatility of C. glutamicum as an industrial microbe and the availability of genetic engineering tools make it a host amenable to rapid and rational manipulation for diverse platform chemicals. Most corynebacteria are known not to utilize xylose as a carbon source, and the absence of xylose-metabolizing genes restricts the growth of Corynebacterium in pentose-rich medium. To develop an efficient bioconversion system for xylonic acid synthesis, the genes of Caulobacter crescentus were expressed in C. glutamicum. The resulting transformants C. glutamicum pVWEx1-xylB and pVWEx1-xylBC were able to grow in mineral medium containing xylose and converted it into the corresponding pentonic acid.
Xylose can be metabolized via four different routes: (I) the oxido-reductase pathway, (II) the isomerase pathway, (III) the Weimberg pathway, an oxidative pathway, and (IV) the Dahms pathway (Cabulong et al. 2018). Once inside the cell, xylose is converted to xylonolactone and then into xylonic acid upon the expression of two genes, namely xylB (xylose dehydrogenase) and xylC (xylonolactonase). These two enzymes are involved in both the Weimberg and the Dahms pathway, where xylose is metabolized to xylonic acid (Brüsseler et al. 2019). In the present study, it was observed that xylose dehydrogenase activity alone is sufficient for xylonic acid production. Without the dehydrogenase activity, the lactonase activity alone cannot carry out the conversion of xylose to xylonic acid. Further, the expression of xylonolactonase along with xylose dehydrogenase resulted in xylonic acid production, but not as efficiently as the dehydrogenase alone in the case of C. glutamicum. It is reported that xylonolactone, once formed, can be converted to xylonic acid either by spontaneous oxidation of the lactone or through enzymatic hydrolysis by the xylonolactonase enzyme (Buchert and Viikari 1988). Corynebacterium glutamicum being an aerobic organism, direct oxidation of xylonolactone to xylonic acid is more favorable inside the cell. Previous studies have also shown that xylose dehydrogenase (xylB) activity alone can result in the production of xylonic acid (Yim et al. 2017).
Corynebacterium glutamicum ATCC 31831 grows on pentoses as the sole carbon source. The gene cluster responsible for pentose utilization comprises a six-cistron transcriptional unit with a total length of 7.8 kb. This ara gene cluster of C. glutamicum ATCC 31831 contains the gene araE, which encodes a pentose transporter that facilitates the efficient uptake of pentose sugars (Kawaguchi et al. 2009). Previous studies have also reported the role of the araE pentose transporter in Corynebacterium glutamicum ATCC 31831 and its exploitation for the production of commodity chemicals like 3HP and ethanol (Becker et al. 2018). In the present study, Corynebacterium glutamicum ATCC 31831, with its innate araE pentose transporter, exhibited effective consumption of xylose as well as its conversion to xylonic acid. Further studies have to be done to explore the role of the same araE pentose transporter as an exporter of xylonic acid.
Micrococcus spp., Pseudomonas, Kluyveromyces lactis, Caulobacter, Enterobacter, Gluconobacter, Klebsiella and Pseudoduganella danionis (Ishizaki et al. 1973; Buchert et al. 1988; Buchert and Viikari 1988; Toivari et al. 2011; Wiebe et al. 2015; Wang et al. 2016; Sundar Lekshmi et al. 2019) are the non-recombinant strains reported for xylonic acid production. Among these, Gluconobacter oxydans is the most prominent wild-type strain, exhibiting xylonic acid titers of up to 100 g L−1 (Toivari et al. 2012). Although these strains are capable of producing xylonic acid from pure sugar, they are less suited as industrial strains, since some are opportunistic pathogens and they have not been tested in hydrolysate medium, possibly owing to their lower tolerance towards lignocellulosic inhibitors. An earlier report showed that recombinant C. glutamicum ATCC 13032 produced 6.23 g L−1 of xylonic acid from 20 g L−1 of xylan (Yim et al. 2017). In that study, multiple modules were employed: (i) a xylan degradation module, (ii) a xylose-to-xylonic acid conversion module based on expression of the xdh gene, and (iii) a xylose transport module based on expression of the xylE gene, with gene expression optimized by introducing promoters (Yim et al. 2017). The product titers with C. glutamicum ATCC 31831 presented in this study are comparable with those of other wild-type and recombinant strains (Table 4), and the volumetric productivity in the feed phase can outperform the titers published for the recombinant C. glutamicum ATCC 13032.
Table 4 Comparison of xylonic acid production and productivity by the best xylonic acid producers
Media engineering was carried out with the statistical tool response surface methodology (RSM) for enhanced production of xylonic acid. A Box–Behnken design comprising 15 experimental runs was used for the optimization study. RSM helped to narrow down the most influential parameters and to optimize them for xylonic acid production. The engineered strain produced up to 56.3 g/L of xylonic acid and is characterized by high volumetric productivity and a maximum product yield of 76.67% under optimized conditions applying defined xylose/glucose mixtures in synthetic medium. One of the major challenges is the range of acidic and furan aldehyde compounds released during lignocellulosic pretreatment. Here, the recombinant C. glutamicum ATCC 31831 could resist the inhibitors present in rice straw hydrolysate and produced xylonic acid at nearly 58.5% of the maximum possible yield.
The remaining challenges involve obtaining sufficient xylose after pretreatment and separating xylonic acid from the fermented broth. For industrial application, downstream processing of xylonic acid is very important. Ethanol precipitation and product recovery by extraction are two interesting options described for the purification of xylonic acid from the fermentation broth (Liu et al. 2012). With this industrially streamlined recombinant strain, a highly profitable bioprocess to produce xylonic acid from lignocellulosic biomass as a cost-efficient second-generation substrate is well within reach. The one-step conversion of xylose to xylonic acid and the bioprocess developed in the present study favor pentose sugar utilization from rice straw in a straightforward and cost-effective manner. The proof of concept showed the simultaneous utilization of biomass-derived sugars (C5 and C6), and this has to be investigated in further detail.
All data generated or analysed during this study are included in this published article and its additional files.
Abe S, Takayama K-I, Kinoshita S (1967) Taxonomical studies on glutamic acid-producing bacteria. J Gen Appl Microbiol 13:279–301. https://doi.org/10.2323/jgam.13.279
Baritugo K-A, Kim HT, David Y, Choi J, Hong SH, Jeong KJ, Choi JH, Joo JC, Park SJ (2018) Metabolic engineering of Corynebacterium glutamicum for fermentative production of chemicals in biorefinery. Appl Microbiol Biotechnol 102:3915–3937. https://doi.org/10.1007/s00253-018-8896-6
Becker J, Rohles CM, Wittmann C (2018) Metabolically engineered Corynebacterium glutamicum for bio-based production of chemicals, fuels, materials, and healthcare products. Metab Eng. https://doi.org/10.1016/j.ymben.2018.07.008
Box GEP, Behnken DW (1960) Some new three level designs for the study of quantitative variables. Technometrics 2:455–475. https://doi.org/10.1080/00401706.1960.10489912
Brüsseler C, Späth A, Sokolowsky S, Marienhagen J (2019) Alone at last!—heterologous expression of a single gene is sufficient for establishing the five-step Weimberg pathway in Corynebacterium glutamicum. Metab Eng Commun 9:e00090. https://doi.org/10.1016/j.mec.2019.e00090
Buchert J, Viikari L (1988) The role of xylonolactone in xylonic acid production by Pseudomonas fragi. Appl Microbiol Biotechnol 27:333–336. https://doi.org/10.1007/BF00251763
Buchert J, Puls J, Poutanen K (1988) Comparison of Pseudomonas fragi and Gluconobacter oxydans for production of xylonic acid from hemicellulose hydrolyzates. Appl Microbiol Biotechnol 28:367–372. https://doi.org/10.1007/BF00268197
Buschke N, Becker J, Schäfer R, Kiefer P, Biedendieck R, Wittmann C (2013) Systems metabolic engineering of xylose-utilizing Corynebacterium glutamicum for production of 1,5-diaminopentane. Biotechnol J 8:557–570. https://doi.org/10.1002/biot.201200367
Byong-Wa C, Benita D, Macuch PJ, Debbie W, Charlotte P, Ara J (2006) The development of cement and concrete additive. Appl Biochem Biotechnol 131:645–658. https://doi.org/10.1385/abab:131:1:645
Cabulong RB, Lee W-K, Bañares AB, Ramos KRM, Nisola GM, Valdehuesa KNG, Chung W-J (2018) Engineering Escherichia coli for glycolic acid production from d-xylose through the Dahms pathway and glyoxylate bypass. Appl Microbiol Biotechnol 102:2179–2189. https://doi.org/10.1007/s00253-018-8744-8
Cao Y, Xian M, Zou H, Zhang H (2013) Metabolic engineering of Escherichia coli for the production of xylonate. PLoS ONE 8:e67305. https://doi.org/10.1371/journal.pone.0067305
Choi JW, Jeon EJ, Jeong KJ (2019) Recent advances in engineering Corynebacterium glutamicum for utilization of hemicellulosic biomass. Curr Opin Biotechnol 57:17–24. https://doi.org/10.1016/j.copbio.2018.11.004
Chung CT, Miller RH (1993) Preparation and storage of competent Escherichia coli cells. Methods Enzymol 218:621–627. https://doi.org/10.1016/0076-6879(93)18045-E
de Arruda PV, de Cássia Lacerda Brambilla Rodrigu R, da Silva DDV, de Almeida Felipe M (2011) Evaluation of hexose and pentose in pre-cultivation of Candida guilliermondii on the key enzymes for xylitol production in sugarcane hemicellulosic hydrolysate. Biodegradation 22:815–822. https://doi.org/10.1007/s10532-010-9397-1
Dhar KS, Wendisch VF, Nampoothiri KM (2016) Engineering of Corynebacterium glutamicum for xylitol production from lignocellulosic pentose sugars. J Biotechnol 230:63–71. https://doi.org/10.1016/j.jbiotec.2016.05.011
Gopinath V, Meiswinkel TM, Wendisch VF, Nampoothiri KM (2011) Amino acid production from rice straw and wheat bran hydrolysates by recombinant pentose-utilizing Corynebacterium glutamicum. Appl Microbiol Biotechnol 92:985–996. https://doi.org/10.1007/s00253-011-3478-x
Hanahan D (1983) Studies on transformation of Escherichia coli with plasmids. J Mol Biol 166:557–580. https://doi.org/10.1016/S0022-2836(83)80284-8
Heider SAE, Wendisch VF (2015) Engineering microbial cell factories: metabolic engineering of Corynebacterium glutamicum with a focus on non-natural products. Biotechnol J 10:1170–1184. https://doi.org/10.1002/biot.201400590
Hirasawa T, Shimizu H (2016) Recent advances in amino acid production by microbial cells. Curr Opin Biotechnol 42:133–146. https://doi.org/10.1016/j.copbio.2016.04.017
Ishizaki H, Ihara T, Yoshitake J, Shimamura M, Imai T (1973) d-Xylonic acid production by Enterobacter cloacae. J Agric Chem Soc Jpn 47:755–761. https://doi.org/10.1271/nogeikagaku1924.47.755
Johanna B, Liisa V (1988) Oxidative d-xylose metabolism of Gluconobacter oxydans. Appl Microbiol Biotechnol 29:375–379
Kawaguchi H, Sasaki M, Vertes AA, Inui M, Yukawa H (2009) Identification and functional analysis of the gene cluster for l-arabinose utilization in Corynebacterium glutamicum. Appl Environ Microbiol 75:3419–3429. https://doi.org/10.1128/AEM.02912-08
Keilhauer C, Eggeling L, Sahm H (1993) Isoleucine synthesis in Corynebacterium glutamicum: molecular analysis of the ilvB-ilvN-ilvC operon. J Bacteriol 175:5595–5603. https://doi.org/10.1128/JB.175.17.5595-5603.1993
Kinoshita S, Udaka S, Shimono M (2004) Studies on the amino acid fermentation Part 1 Production of l-glutamic acid by various microorganisms. J Gen Appl Microbiol 50(6):331–343
Lin T-H, Huang C-F, Guo G-L, Hwang W-S, Huang S-L (2012) Pilot-scale ethanol production from rice straw hydrolysates using xylose-fermenting Pichia stipitis. Bioresour Technol 116:314–319. https://doi.org/10.1016/j.biortech.2012.03.089
Liu H, Valdehuesa KNG, Nisola GM, Ramos KRM, Chung W-J (2012) High yield production of d-xylonic acid from d-xylose using engineered Escherichia coli. Bioresour Technol 115:244–248. https://doi.org/10.1016/j.biortech.2011.08.065
Matano C, Meiswinkel TM, Wendisch VF (2014) Amino acid production from rice straw hydrolyzates. In: Wheat and rice in disease prevention and health. pp 493–505
Meijnen JP, De Winde JH, Ruijssenaars HJ (2009) Establishment of oxidative D-xylose metabolism in Pseudomonas putida S12. Appl Environ Microbiol 75:2784–2791. https://doi.org/10.1128/AEM.02713-08
Meiswinkel TM, Gopinath V, Lindner SN, Nampoothiri KM, Wendisch VF (2013) Accelerated pentose utilization by Corynebacterium glutamicum for accelerated production of lysine, glutamate, ornithine and putrescine. Microb Biotechnol 6:131–140. https://doi.org/10.1111/1751-7915.12001
Mussatto SI, Teixeira JA (2010) Lignocellulose as raw material in fermentation processes. Microbial Biotechnol 2:897–907
Nygård Y, Toivari MH, Penttilä M, Ruohonen L, Wiebe MG (2011) Bioconversion of d-xylose to d-xylonate with Kluyveromyces lactis. Metab Eng 13:383–391. https://doi.org/10.1016/j.ymben.2011.04.001
Ou MS, Ingram LO, Shanmugam KT (2011) L(+)-Lactic acid production from non-food carbohydrates by thermotolerant Bacillus coagulans. J Ind Microbiol Biotechnol 38:599–605. https://doi.org/10.1007/s10295-010-0796-4
Pérez-García F, Peters-Wendisch P, Wendisch VF (2016) Engineering Corynebacterium glutamicum for fast production of l-lysine and l-pipecolic acid. Appl Microbiol Biotechnol 100:8075–8090. https://doi.org/10.1007/s00253-016-7682-6
Peters-Wendisch PG, Schiel B, Wendisch VF, Katsoulidis E, Möckel B, Sahm H, Eikmanns BJ (2001) Pyruvate carboxylase is a major bottleneck for glutamate and lysine production by Corynebacterium glutamicum. J Mol Biol Biotechnol 3(2):295–300
Raganati F, Olivieri G, Götz P, Marzocchella A, Salatino P (2015) Butanol production from hexoses and pentoses by fermentation of Clostridium acetobutylicum. Anaerobe 34:146–155. https://doi.org/10.1016/j.anaerobe.2015.05.008
Sakai S, Tsuchida Y, Okino S, Ichihashi O, Kawaguchi H, Watanabe T, Inui M, Yukawa H (2007) Effect of lignocellulose-derived inhibitors on growth of and ethanol production by growth-arrested Corynebacterium glutamicum R. Appl Environ Microbiol 73:2349–2353. https://doi.org/10.1128/AEM.02880-06
Sambrook BJ, Maccallum P, Russell D (2006) Molecular cloning: a laboratory manual to order or request additional information. Mol Clon 1:1–3
Stephens C, Christen B, Fuchs T, Sundaram V, Watanabe K, Jenal U (2007) Genetic analysis of a novel pathway for d-xylose metabolism in Caulobacter crescentus. J Bacteriol 189:2181–2185. https://doi.org/10.1128/JB.01438-06
Sundar Lekshmi MS, Susmitha A, Soumya MP, Keerthi Sasikumar, Nampoothiri Madhavan K (2019) Bioconversion of d-xylose to d-xylonic acid by Pseudoduganella danionis. Indian J Exp Biol 57:821–824
Toivari MH, Ruohonen L, Richard P, Penttilä M, Wiebe MG (2010) Saccharomyces cerevisiae engineered to produce D-xylonate. Appl Microbiol Biotechnol 88:751–760. https://doi.org/10.1007/s00253-010-2787-9
Toivari MH, Penttil M, Ruohonen L, Wiebe MG, Nyg Y (2011) Bioconversion of d-xylose to d-xylonate with Kluyveromyces lactis. Metab Eng 13:383–391. https://doi.org/10.1016/j.ymben.2011.04.001
Toivari M, Nygård Y, Kumpula EP, Vehkomäki ML, Benčina M, Valkonen M, Maaheimo H, Andberg M, Koivula A, Ruohonen L, Penttilä M, Wiebe MG (2012) Metabolic engineering of Saccharomyces cerevisiae for bioconversion of d-xylose to d-xylonate. Metab Eng 14:427–436. https://doi.org/10.1016/j.ymben.2012.03.002
van der Rest ME, Lange C, Molenaar D (1999) A heat shock following electroporation induces highly efficient transformation of Corynebacterium glutamicum with xenogeneic plasmid DNA. Appl Microbiol Biotechnol 52:541–545. https://doi.org/10.1007/s002530051557
Wang C, Wei D, Zhang Z, Wang D, Shi J, Kim CH, Jiang B, Han Z, Hao J (2016) Production of xylonic acid by Klebsiella pneumoniae. Appl Microbiol Biotechnol 100:10055–10063. https://doi.org/10.1007/s00253-016-7825-9
Wei Niu, Mapitso Molefe N, Frost JW (2003) Microbial synthesis of the energetic material precursor 1,2,4-butanetriol. J Am Chem Soc 125:12998–12999
Wendisch VF, Brito LF, Gil Lopez M, Hennig G, Pfeifenschneider J, Sgobba E, Veldmann KH (2016) The flexible feedstock concept in industrial biotechnology: metabolic engineering of Escherichia coli, Corynebacterium glutamicum, Pseudomonas, Bacillus and yeast strains for access to alternative carbon sources. J Biotechnol 234:139–157. https://doi.org/10.1016/j.jbiotec.2016.07.022
Wiebe MG, Nygård Y, Oja M, Andberg M, Ruohonen L, Koivula A, Penttilä M, Toivari M (2015) A novel aldose-aldose oxidoreductase for co-production of d-xylonate and xylitol from d-xylose with Saccharomyces cerevisiae. Appl Microbiol Biotechnol 99:9439–9447. https://doi.org/10.1007/s00253-015-6878-5
Wisselink HW, Toirkens MJ, Wu Q, Pronk JT, van Maris AJA (2009) Novel evolutionary engineering approach for accelerated utilization of glucose, xylose, and arabinose mixtures by engineered Saccharomyces cerevisiae strains. Appl Environ Microbiol 75:907–914. https://doi.org/10.1128/AEM.02268-08
Yim SS, Choi JW, Lee SH, Jeon EJ, Chung WJ, Jeong KJ (2017) Engineering of Corynebacterium glutamicum for consolidated conversion of hemicellulosic biomass into xylonic acid. Biotechnol J 12:1–9. https://doi.org/10.1002/biot.201700040
Zahoor A, Otten A, Wendisch VF (2014) Metabolic engineering of Corynebacterium glutamicum for glycolate production. J Biotechnol 192:366–375. https://doi.org/10.1016/j.jbiotec.2013.12.020
The first author LS acknowledges the Senior Research Fellowship (SRF) by Council of Scientific and Innovative Research (CSIR), New Delhi. KMN and VFW acknowledge the financial assistance from DBT, New Delhi BMBF, and Germany to work on Corynebacterium glutamicum.
The study is funded by DBT, New Delhi and BMBF, Germany under Indo German collaboration.
Microbial Processes and Technology Division, CSIR–National Institute for Interdisciplinary Science and Technology (NIIST), Thiruvananthapuram, 695019, Kerala, India
M. S. Lekshmi Sundar, Aliyath Susmitha, Devi Rajan, Keerthi Sasikumar & K. Madhavan Nampoothiri
Academy of Scientific and Innovative Research (AcSIR), CSIR-National Institute for Interdisciplinary Science and Technology (CSIR-NIIST), Thiruvananthapuram, 695019, Kerala, India
M. S. Lekshmi Sundar, Aliyath Susmitha, Keerthi Sasikumar & K. Madhavan Nampoothiri
Genetics of Prokaryotes, Faculty of Biology & CeBiTec, Bielefeld University, Bielefeld, Germany
Silvin Hannibal & Volker F. Wendisch
M. S. Lekshmi Sundar
Aliyath Susmitha
Devi Rajan
Silvin Hannibal
Keerthi Sasikumar
Volker F. Wendisch
K. Madhavan Nampoothiri
LS, the first author, executed the majority of the work and wrote the article. SA, SH and KS contributed to the molecular biology aspects of the work, while DR was involved in the RSM studies. VFW helped in critical reading of the manuscript. KMN, the corresponding author, conceived and designed the research and helped to prepare the manuscript. All authors read and approved the manuscript.
Correspondence to K. Madhavan Nampoothiri.
The authors declare that they have no conflict of interest regarding this manuscript. This article doesn't contain any studies performed with animals or humans by any of the authors.
The authors declare that they have no competing interests.
Growth (circles) and xylose consumption (triangles) by C. glutamicum ATCC 13032 (pVWEx1-xylB) (open symbols) and C. glutamicum ATCC 31831 (pVWEx1-xylB) (closed symbols) in CGXII medium containing 60 g/L xylose.
Sundar, M.S.L., Susmitha, A., Rajan, D. et al. Heterologous expression of genes for bioconversion of xylose to xylonic acid in Corynebacterium glutamicum and optimization of the bioprocess. AMB Expr 10, 68 (2020). https://doi.org/10.1186/s13568-020-01003-9
Corynebacterium glutamicum
Heterologous expression
Response surface methodology (RSM)
Xylonic acid
Xylose dehydrogenase
Graph partition
In mathematics, a graph partition is the reduction of a graph to a smaller graph by partitioning its set of nodes into mutually exclusive groups. Edges of the original graph that cross between the groups will produce edges in the partitioned graph. If the number of resulting edges is small compared to the original graph, then the partitioned graph may be better suited for analysis and problem-solving than the original. Finding a partition that simplifies graph analysis is a hard problem, but one that has applications to scientific computing, VLSI circuit design, and task scheduling in multiprocessor computers, among others.[1] Recently, the graph partition problem has gained importance due to its application for clustering and detection of cliques in social, pathological and biological networks. For a survey on recent trends in computational methods and applications see Buluc et al. (2013).[2] Two common examples of graph partitioning are minimum cut and maximum cut problems.
Problem complexity
Typically, graph partition problems fall under the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms.[3] However, uniform graph partitioning or a balanced graph partition problem can be shown to be NP-complete to approximate within any finite factor.[1] Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist,[4] unless P=NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.[4]
Problem
Consider a graph G = (V, E), where V denotes the set of n vertices and E the set of edges. For a (k,v) balanced partition problem, the objective is to partition G into k components of at most size v · (n/k), while minimizing the capacity of the edges between separate components.[1] Also, given G and an integer k > 1, partition V into k parts (subsets) V1, V2, ..., Vk such that the parts are disjoint and have equal size, and the number of edges with endpoints in different parts is minimized. Such partition problems have been discussed in literature as bicriteria-approximation or resource augmentation approaches. A common extension is to hypergraphs, where an edge can connect more than two vertices. A hyperedge is not cut if all vertices are in one partition, and cut exactly once otherwise, no matter how many vertices are on each side. This usage is common in electronic design automation.
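As an illustration (not part of the original article), the following Python sketch computes the two quantities this formulation optimizes and constrains: the number of edges crossing between parts, and whether a given assignment of vertices to k parts satisfies the (k, v) balance condition. The graph, the partition and all names used here are made up for the example.

```python
# Sketch: computing the edge cut and checking the (k, v) balance constraint for a
# given partition, represented as a dict mapping each vertex to its part index.
from collections import Counter

def edge_cut(edges, part):
    """Number of edges whose endpoints lie in different parts."""
    return sum(1 for u, v in edges if part[u] != part[v])

def is_balanced(part, n, k, v=1.0):
    """Check that every part contains at most v * (n / k) vertices."""
    sizes = Counter(part.values())
    return all(size <= v * (n / k) for size in sizes.values())

# Toy example: a 6-cycle split into two halves of three consecutive vertices.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
part = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}
print(edge_cut(edges, part), is_balanced(part, n=6, k=2))  # -> 2 True
```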
Analysis
For a specific (k, 1 + ε) balanced partition problem, we seek to find a minimum cost partition of G into k components with each component containing a maximum of (1 + ε)·(n/k) nodes. We compare the cost of this approximation algorithm to the cost of a (k,1) cut, wherein each of the k components must have the same size of (n/k) nodes each, thus being a more restricted problem. Thus,
$\max _{i}|V_{i}|\leq (1+\varepsilon )\left\lceil {\frac {|V|}{k}}\right\rceil .$
We already know that the (2,1) cut is the minimum bisection problem, which is NP-complete.[5] Next, we consider a 3-partition instance with n = 3k, which can be reduced to this setting in polynomial time.[1] Now, if we assume that we have a finite approximation algorithm for (k, 1)-balanced partition, then either the 3-partition instance can be solved using the balanced (k,1) partition in G or it cannot be solved. If the 3-partition instance can be solved, then the (k, 1)-balanced partitioning problem in G can be solved without cutting any edge. Otherwise, if the 3-partition instance cannot be solved, the optimum (k, 1)-balanced partitioning in G will cut at least one edge. An approximation algorithm with a finite approximation factor has to differentiate between these two cases. Hence, it can solve the 3-partition problem in polynomial time, which is impossible unless P = NP. Thus, it is evident that the (k,1)-balanced partitioning problem has no polynomial-time approximation algorithm with a finite approximation factor unless P = NP.[1]
The planar separator theorem states that any n-vertex planar graph can be partitioned into roughly equal parts by the removal of O(√n) vertices. This is not a partition in the sense described above, because the partition set consists of vertices rather than edges. However, the same result also implies that every planar graph of bounded degree has a balanced cut with O(√n) edges.
Graph partition methods
Since graph partitioning is a hard problem, practical solutions are based on heuristics. There are two broad categories of methods, local and global. Well-known local methods are the Kernighan–Lin and Fiduccia–Mattheyses algorithms, which were the first effective 2-way cut heuristics based on local search. Their major drawback is the arbitrary initial partitioning of the vertex set, which can affect the final solution quality. Global approaches rely on properties of the entire graph and do not depend on an arbitrary initial partition. The most common example is spectral partitioning, where a partition is derived from approximate eigenvectors of the adjacency matrix, or spectral clustering that groups graph vertices using the eigendecomposition of the graph Laplacian matrix.
Multi-level methods
Main article: Multi-level technique
A multi-level graph partitioning algorithm works by applying one or more stages. Each stage reduces the size of the graph by collapsing vertices and edges, partitions the smaller graph, then maps back and refines this partition of the original graph.[6] A wide variety of partitioning and refinement methods can be applied within the overall multi-level scheme. In many cases, this approach can give both fast execution times and very high quality results. One widely used example of such an approach is METIS,[7] a graph partitioner, and hMETIS, the corresponding partitioner for hypergraphs.[8] An alternative approach, originating in [9] and implemented, e.g., in scikit-learn, is spectral clustering, with the partitioning determined from eigenvectors of the graph Laplacian matrix of the original graph computed by the LOBPCG solver with multigrid preconditioning.
Spectral partitioning and spectral bisection
Given a graph $G=(V,E)$ with adjacency matrix $A$, where an entry $A_{ij}$ indicates an edge between nodes $i$ and $j$, and degree matrix $D$, a diagonal matrix in which each diagonal entry $d_{ii}$ is the degree of node $i$, the Laplacian matrix $L$ is defined as $L=D-A$. Now, a ratio-cut partition for graph $G=(V,E)$ is defined as a partition of $V$ into disjoint $U$ and $W$, minimizing the ratio
${\frac {|E(G)\cap (U\times W)|}{|U|\cdot |W|}}$
of the number of edges that actually cross this cut to the number of pairs of vertices that could support such edges. Spectral graph partitioning can be motivated[10] by analogy with partitioning of a vibrating string or a mass-spring system and similarly extended to the case of negative weights of the graph.[11]
Fiedler eigenvalue and eigenvector
In such a scenario, the second smallest eigenvalue ($\lambda _{2}$) of $L$ yields a lower bound on the optimal cost ($c$) of the ratio-cut partition, with $c\geq {\frac {\lambda _{2}}{n}}$. The eigenvector ($V_{2}$) corresponding to $\lambda _{2}$, called the Fiedler vector, bisects the graph into only two communities based on the sign of the corresponding vector entry. Division into a larger number of communities can be achieved by repeated bisection or by using multiple eigenvectors corresponding to the smallest eigenvalues.[12] The examples in Figures 1, 2 illustrate the spectral bisection approach.
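A minimal sketch of the spectral bisection just described, assuming networkx and numpy are available; the barbell graph is only an illustrative input, and the sign split of the Fiedler vector is the bisection rule from the text.

```python
# Sketch of spectral bisection: split a graph by the sign of the Fiedler vector.
import networkx as nx
import numpy as np

G = nx.barbell_graph(5, 0)                       # two 5-cliques joined by one edge
nodes = list(G.nodes)
L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)

eigvals, eigvecs = np.linalg.eigh(L)             # eigenvalues in ascending order
fiedler = eigvecs[:, 1]                          # eigenvector for lambda_2

part_a = [n for n, value in zip(nodes, fiedler) if value >= 0]
part_b = [n for n, value in zip(nodes, fiedler) if value < 0]
print(part_a, part_b)                            # should separate the two cliques
```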
Modularity and ratio-cut
Minimum cut partitioning, however, fails when the number of communities to be partitioned or the partition sizes are unknown. For instance, optimizing the cut size for free group sizes puts all vertices in the same community. Additionally, cut size may be the wrong thing to minimize, since a good division is not just one with a small number of edges between communities. This motivated the use of modularity (Q)[13] as a metric to optimize a balanced graph partition. The example in Figure 3 illustrates 2 instances of the same graph such that in (a) modularity (Q) is the partitioning metric and in (b), ratio-cut is the partitioning metric.
Applications
Conductance
Another objective function used for graph partitioning is conductance, which is the ratio between the number of cut edges and the volume of the smallest part. Conductance is related to electrical flows and random walks. The Cheeger bound guarantees that spectral bisection provides partitions with nearly optimal conductance; the quality of this approximation depends on the second smallest eigenvalue of the Laplacian, λ2.
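A small sketch of the conductance objective under the usual definition (cut edges divided by the smaller of the two volumes, volume being the sum of degrees); the helper name and the example graph are mine, not from any particular library.

```python
import networkx as nx

def conductance(G, S):
    """Conductance of the cut (S, V \\ S): cut edges / min volume of the two sides."""
    S = set(S)
    cut = sum(1 for u, v in G.edges if (u in S) != (v in S))
    degrees = dict(G.degree)
    vol_S = sum(degrees[n] for n in S)
    vol_rest = sum(d for n, d in degrees.items() if n not in S)
    return cut / min(vol_S, vol_rest)

G = nx.barbell_graph(5, 0)            # two 5-cliques joined by one edge
print(conductance(G, range(5)))       # 1 cut edge over volume 21 -> about 0.048
```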
Immunization
Graph partition can be useful for identifying the minimal set of nodes or links that should be immunized in order to stop epidemics.[14]
Other graph partition methods
Spin models have been used for clustering of multivariate data wherein similarities are translated into coupling strengths.[15] The properties of ground state spin configuration can be directly interpreted as communities. Thus, a graph is partitioned to minimize the Hamiltonian of the partitioned graph. The Hamiltonian (H) is derived by assigning the following partition rewards and penalties.
• Reward internal edges between nodes of same group (same spin)
• Penalize missing edges in same group
• Penalize existing edges between different groups
• Reward non-links between different groups.
Additionally, kernel-PCA-based spectral clustering takes the form of a least-squares support vector machine framework, and hence it becomes possible to project the data entries to a kernel-induced feature space that has maximal variance, thus implying a high separation between the projected communities.[16]
Some methods express graph partitioning as a multi-criteria optimization problem which can be solved using local methods expressed in a game theoretic framework where each node makes a decision on the partition it chooses.[17]
For very large-scale distributed graphs classical partition methods might not apply (e.g., spectral partitioning, Metis[7]) since they require full access to graph data in order to perform global operations. For such large-scale scenarios distributed graph partitioning is used to perform partitioning through asynchronous local operations only.
Software tools
scikit-learn implements spectral clustering with the partitioning determined from eigenvectors of the graph Laplacian matrix for the original graph computed by ARPACK, or by LOBPCG solver with multigrid preconditioning.[9]
Chaco,[18] due to Hendrickson and Leland, implements the multilevel approach outlined above and basic local search algorithms. Moreover, they implement spectral partitioning techniques.
METIS[7] is a graph partitioning family by Karypis and Kumar. Among this family, kMetis aims at greater partitioning speed, hMetis,[8] applies to hypergraphs and aims at partition quality, and ParMetis[7] is a parallel implementation of the Metis graph partitioning algorithm.
PaToH[19] is another hypergraph partitioner.
KaHyPar[20][21][22] is a multilevel hypergraph partitioning framework providing direct k-way and recursive bisection based partitioning algorithms. It instantiates the multilevel approach in its most extreme version, removing only a single vertex in every level of the hierarchy. By using this very fine grained n-level approach combined with strong local search heuristics, it computes solutions of very high quality.
Scotch[23] is a graph partitioning framework by Pellegrini. It uses recursive multilevel bisection and includes sequential as well as parallel partitioning techniques.
Jostle[24] is a sequential and parallel graph partitioning solver developed by Chris Walshaw. The commercialized version of this partitioner is known as NetWorks.
Party[25] implements the Bubble/shape-optimized framework and the Helpful Sets algorithm.
The software packages DibaP[26] and its MPI-parallel variant PDibaP[27] by Meyerhenke implement the Bubble framework using diffusion; DibaP also uses AMG-based techniques for coarsening and solving linear systems arising in the diffusive approach.
Sanders and Schulz released a graph partitioning package KaHIP[28] (Karlsruhe High Quality Partitioning) that implements for example flow-based methods, more-localized local searches and several parallel and sequential meta-heuristics.
The tools Parkway[29] by Trifunovic and Knottenbelt as well as Zoltan[30] by Devine et al. focus on hypergraph partitioning.
References
1. Andreev, Konstantin; Räcke, Harald (2004). "Balanced graph partitioning". Proceedings of the sixteenth annual ACM symposium on Parallelism in algorithms and architectures. Barcelona, Spain. pp. 120–124. CiteSeerX 10.1.1.417.8592. doi:10.1145/1007912.1007931. ISBN 978-1-58113-840-5.
2. Buluc, Aydin; Meyerhenke, Henning; Safro, Ilya; Sanders, Peter; Schulz, Christian (2013). "Recent Advances in Graph Partitioning". arXiv:1311.3144 [cs.DS].
3. Feldmann, Andreas Emil; Foschini, Luca (2012). "Balanced Partitions of Trees and Applications". Proceedings of the 29th International Symposium on Theoretical Aspects of Computer Science: 100–111.
4. Feldmann, Andreas Emil (2012). "Fast Balanced Partitioning is Hard, Even on Grids and Trees". Proceedings of the 37th International Symposium on Mathematical Foundations of Computer Science. arXiv:1111.6745. Bibcode:2011arXiv1111.6745F.
5. Garey, Michael R.; Johnson, David S. (1979). Computers and intractability: A guide to the theory of NP-completeness. W. H. Freeman & Co. ISBN 978-0-7167-1044-8.
6. Hendrickson, B.; Leland, R. (1995). A multilevel algorithm for partitioning graphs. Proceedings of the 1995 ACM/IEEE conference on Supercomputing. ACM. p. 28.
7. Karypis, G.; Kumar, V. (1999). "A fast and high quality multilevel scheme for partitioning irregular graphs". SIAM Journal on Scientific Computing. 20 (1): 359. CiteSeerX 10.1.1.39.3415. doi:10.1137/S1064827595287997. S2CID 3628209.
8. Karypis, G.; Aggarwal, R.; Kumar, V.; Shekhar, S. (1997). Multilevel hypergraph partitioning: application in VLSI domain. Proceedings of the 34th annual Design Automation Conference. pp. 526–529.
9. Knyazev, Andrew V. (2006). Multiscale Spectral Graph Partitioning and Image Segmentation. Workshop on Algorithms for Modern Massive Data Sets Stanford University and Yahoo! Research.
10. J. Demmel, CS267: Notes for Lecture 23, April 9, 1999, Graph Partitioning, Part 2
11. Knyazev, Andrew (2018). On spectral partitioning of signed graphs. Eighth SIAM Workshop on Combinatorial Scientific Computing, CSC 2018, Bergen, Norway, June 6–8. doi:10.1137/1.9781611975215.2.
12. Naumov, M.; Moon, T. (2016). "Parallel Spectral Graph Partitioning". NVIDIA Technical Report. nvr-2016-001.
13. Newman, M. E. J. (2006). "Modularity and community structure in networks". PNAS. 103 (23): 8577–8696. arXiv:physics/0602124. Bibcode:2006PNAS..103.8577N. doi:10.1073/pnas.0601602103. PMC 1482622. PMID 16723398.
14. Y. Chen, G. Paul, S. Havlin, F. Liljeros, H.E. Stanley (2009). "Finding a Better Immunization Strategy". Phys. Rev. Lett. 101 (5): 058701. doi:10.1103/PhysRevLett.101.058701. PMID 18764435.
15. Reichardt, Jörg; Bornholdt, Stefan (July 2006). "Statistical mechanics of community detection". Phys. Rev. E. 74 (1): 016110. arXiv:cond-mat/0603718. Bibcode:2006PhRvE..74a6110R. doi:10.1103/PhysRevE.74.016110. PMID 16907154. S2CID 792965.
16. Alzate, Carlos; Suykens, Johan A. K. (2010). "Multiway Spectral Clustering with Out-of-Sample Extensions through Weighted Kernel PCA". IEEE Transactions on Pattern Analysis and Machine Intelligence. 32 (2): 335–347. doi:10.1109/TPAMI.2008.292. ISSN 0162-8828. PMID 20075462. S2CID 200488.
17. Kurve, A.; Griffin, C.; Kesidis G. (2011) "A graph partitioning game for distributed simulation of networks", Proceedings of the 2011 International Workshop on Modeling, Analysis, and Control of Complex Networks: 9–16
18. Hendrickson, Bruce. "Chaco: Software for Partitioning Graphs".
19. Catalyürek, Ü.; Aykanat, C. (2011). PaToH: Partitioning Tool for Hypergraphs.
20. Schlag, S.; Henne, V.; Heuer, T.; Meyerhenke, H.; Sanders, P.; Schulz, C. (2015-12-30). "K-way Hypergraph Partitioning via n-Level Recursive Bisection". 2016 Proceedings of the Eighteenth Workshop on Algorithm Engineering and Experiments (ALENEX). Proceedings. Society for Industrial and Applied Mathematics. pp. 53–67. arXiv:1511.03137. doi:10.1137/1.9781611974317.5. ISBN 978-1-61197-431-7. S2CID 1674598.
21. Akhremtsev, Y.; Heuer, T.; Sanders, P.; Schlag, S. (2017-01-01). "Engineering a direct k-way Hypergraph Partitioning Algorithm". 2017 Proceedings of the Nineteenth Workshop on Algorithm Engineering and Experiments (ALENEX). Proceedings. Society for Industrial and Applied Mathematics. pp. 28–42. doi:10.1137/1.9781611974768.3. ISBN 978-1-61197-476-8.
22. Heuer, Tobias; Schlag, Sebastian (2017). Iliopoulos, Costas S.; Pissis, Solon P.; Puglisi, Simon J.; Raman, Rajeev (eds.). Improving Coarsening Schemes for Hypergraph Partitioning by Exploiting Community Structure. pp. 21:1–21:19. doi:10.4230/LIPIcs.SEA.2017.21. ISBN 9783959770361.
23. Chevalier, C.; Pellegrini, F. (2008). "PT-Scotch: A Tool for Efficient Parallel Graph Ordering". Parallel Computing. 34 (6): 318–331. arXiv:0907.1375. doi:10.1016/j.parco.2007.12.001. S2CID 10433524.
24. Walshaw, C.; Cross, M. (2000). "Mesh Partitioning: A Multilevel Balancing and Refinement Algorithm". Journal on Scientific Computing. 22 (1): 63–80. Bibcode:2000SJSC...22...63W. CiteSeerX 10.1.1.19.1836. doi:10.1137/s1064827598337373.
25. Diekmann, R.; Preis, R.; Schlimbach, F.; Walshaw, C. (2000). "Shape-optimized Mesh Partitioning and Load Balancing for Parallel Adaptive FEM". Parallel Computing. 26 (12): 1555–1581. CiteSeerX 10.1.1.46.5687. doi:10.1016/s0167-8191(00)00043-0.
26. Meyerhenke, H.; Monien, B.; Sauerwald, T. (2008). "A New Diffusion-Based Multilevel Algorithm for Computing Graph Partitions". Journal of Parallel Computing and Distributed Computing. 69 (9): 750–761. doi:10.1016/j.jpdc.2009.04.005. S2CID 9755877.
27. Meyerhenke, H. (2013). Shape Optimizing Load Balancing for MPI-Parallel Adaptive Numerical Simulations. 10th DIMACS Implementation Challenge on Graph Partitioning and Graph Clustering. pp. 67–82.
28. Sanders, P.; Schulz, C. (2011). Engineering Multilevel Graph Partitioning Algorithms. Proceedings of the 19th European Symposium on Algorithms (ESA). Vol. 6942. pp. 469–480.
29. Trifunovic, A.; Knottenbelt, W. J. (2008). "Parallel Multilevel Algorithms for Hypergraph Partitioning". Journal of Parallel and Distributed Computing. 68 (5): 563–581. CiteSeerX 10.1.1.641.7796. doi:10.1016/j.jpdc.2007.11.002.
30. Devine, K.; Boman, E.; Heaphy, R.; Bisseling, R.; Catalyurek, Ü. (2006). Parallel Hypergraph Partitioning for Scientific Computing. Proceedings of the 20th International Conference on Parallel and Distributed Processing. p. 124.
External links
• Chamberlain, Bradford L. (1998). "Graph Partitioning Algorithms for Distributing Workloads of Parallel Computations"
Bibliography
• Bichot, Charles-Edmond; Siarry, Patrick (2011). Graph Partitioning: Optimisation and Applications. ISTE – Wiley. p. 384. ISBN 978-1848212336.
• Feldmann, Andreas Emil (2012). Balanced Partitioning of Grids and Related Graphs: A Theoretical Study of Data Distribution in Parallel Finite Element Model Simulations. Goettingen, Germany: Cuvillier Verlag. p. 218. ISBN 978-3954041251. An exhaustive analysis of the problem from a theoretical point of view.
• Kernighan, B. W.; Lin, S. (1970). "An Efficient Heuristic Procedure for Partitioning Graphs" (PDF). Bell System Technical Journal. 49 (2): 291–307. doi:10.1002/j.1538-7305.1970.tb01770.x. One of the early fundamental works in the field. However, performance is O(n2), so it is no longer commonly used.
• Fiduccia, C. M.; Mattheyses, R. M. (1982). A Linear-Time Heuristic for Improving Network Partitions. Design Automation Conference. doi:10.1109/DAC.1982.1585498. A later variant that is linear time, very commonly used, both by itself and as part of multilevel partitioning, see below.
• Karypis, G.; Kumar, V. (1999). "A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs". SIAM Journal on Scientific Computing. Multi-level partitioning is the current state of the art. This paper also has good explanations of many other methods, and comparisons of the various methods on a wide variety of problems.
• Karypis, G.; Aggarwal, R.; Kumar, V.; Shekhar, S. (March 1999). "Multilevel hypergraph partitioning: applications in VLSI domain". IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 7 (1): 69–79. CiteSeerX 10.1.1.553.2367. doi:10.1109/92.748202. Graph partitioning (and in particular, hypergraph partitioning) has many applications to IC design.
• Kirkpatrick, S.; Gelatt, C. D., Jr.; Vecchi, M. P. (13 May 1983). "Optimization by Simulated Annealing". Science. 220 (4598): 671–680. Bibcode:1983Sci...220..671K. CiteSeerX 10.1.1.123.7607. doi:10.1126/science.220.4598.671. PMID 17813860. S2CID 205939. Simulated annealing can be used as well.
• Hagen, L.; Kahng, A. B. (September 1992). "New spectral methods for ratio cut partitioning and clustering". IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. 11 (9): 1074–1085. doi:10.1109/43.159993. There is a whole class of spectral partitioning methods, which use the eigenvectors of the Laplacian of the connectivity graph.
|
Wikipedia
|
Suppose that $\sec x+\tan x=\frac{22}7$ and that $\csc x+\cot x=\frac mn,$ where $\frac mn$ is in lowest terms. Find $m+n.$
Use the two trigonometric Pythagorean identities $1 + \tan^2 x = \sec^2 x$ and $1 + \cot^2 x = \csc^2 x$.
If we square the given $\sec x = \frac{22}{7} - \tan x$, we find that
\begin{align*} \sec^2 x &= \left(\frac{22}7\right)^2 - 2\left(\frac{22}7\right)\tan x + \tan^2 x \\ 1 &= \left(\frac{22}7\right)^2 - \frac{44}7 \tan x \end{align*}
This yields $\tan x = \frac{435}{308}$.
Let $y = \frac mn$. Then squaring,
\[\csc^2 x = (y - \cot x)^2 \Longrightarrow 1 = y^2 - 2y\cot x.\]
Substituting $\cot x = \frac{1}{\tan x} = \frac{308}{435}$ yields a quadratic equation: $0 = 435y^2 - 616y - 435 = (15y - 29)(29y + 15)$. It turns out that only the positive root will work, so the value of $y = \frac{29}{15}$ and $m + n = \boxed{44}$.
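A quick numeric sanity check of the values derived above (not part of the original argument); it only assumes Python's standard math module.

```python
# Numeric check: tan x = 435/308 should reproduce both given sums.
import math

x = math.atan(435 / 308)                      # tan x from the first identity
print(1 / math.cos(x) + math.tan(x))          # sec x + tan x, should be 22/7 ~ 3.142857
print(1 / math.sin(x) + 1 / math.tan(x))      # csc x + cot x, should be 29/15 ~ 1.933333
```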
|
Math Dataset
|
Quantum computers are devices which, exploiting intrinsically quantum mechanical phenomena, are believed to be able to perform certain operations more efficiently. While the basic unit of information that is manipulated by classical computers is the bit, quantum computers manipulate quantum states, often in the form two-level quantum systems typically referred to as qubits.
Several models for quantum computation have been proposed and are actively researched. Being the field of quantum computation still not yet fully mature, no computational model is indisputably superior to the others. Arguably the most known one is the circuit model. Among the others, there are measurement-based quantum computation, quantum annealing, and continuous variable quantum computation.
In gate-based quantum computation, gates are represented by matrices, and include types such as the Pauli X (also termed "NOT"), Y, and Z (pronounced "zed") gates, which are single-qubit gates, multiple-qubit gates like the controlled-NOT or CNOT gate and Toffoli gate, and others. The set of single-qubit gates plus the CNOT gate forms a set of universal gates.
A register of qubits is described by a superposition such as $|\psi\rangle = \alpha|0\ldots00\rangle + \beta|0\ldots01\rangle + \gamma|0\ldots10\rangle + \cdots + \zeta|1\ldots11\rangle$, where $\alpha$ through $\zeta$ (and there can be more amplitudes than this) represent the amplitudes of the state, and determine the probability of a particular state resulting upon collapse of the wavefunction upon measurement. Each of the items between the $|\,\rangle$ represents a particular possible state that can occur upon measurement.
When measurement occurs, the qubits become normal, classical bits, which is part of what makes writing algorithms for quantum computers so difficult. The advantage in a quantum computer lies in the fact that the whole system can be, and in fact must be, represented by a single vector. This means that all the qubits share information, and further, any one gate, even if a single-qubit gate, has repercussions on the whole system.
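A minimal sketch of this point with numpy: a single-qubit gate is extended to the full register with a Kronecker (tensor) product and then applied to the whole state vector. The state and gate below are just the standard textbook matrices, not a particular library's API.

```python
# Sketch: apply a single-qubit X (NOT) gate to the first qubit of a two-qubit register.
# The full state lives in a 4-dimensional vector; the gate is extended with a
# Kronecker product so it acts on the whole system.
import numpy as np

X = np.array([[0, 1], [1, 0]])                  # Pauli X
I = np.eye(2)

state = np.array([1, 0, 0, 0], dtype=complex)   # |00>

X_on_first_qubit = np.kron(X, I)                # X on the first qubit, identity on the second
print(X_on_first_qubit @ state)                 # -> |10>, i.e. [0, 0, 1, 0]
```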
There are many different physical realizations of the quantum computer. There are optical quantum computers, which use photons as qubits, and things like Fabry–Perot cavities, mirrors, beamsplitters, phase shifters, and so forth for gates. There are superconducting quantum computers, which use Josephson junctions. There are ion-trap quantum computers, which use ions for qubits, hold them in place with electromagnetic fields, and then manipulate the state of the ions with lasers. A list of realizations can be found under "Quantum Computer Science" and "Physical Implementations".
Nielsen and Chuang's Quantum Computation and Quantum Information is the standard textbook for the field.
Michael Nielsen has a series of videos on YouTube called Quantum Computing for the Determined.
It is recommended that you have a base understanding of linear algebra in particular if you wish to learn this subject. Some understanding of quantum mechanics and computer science will be highly useful and something you will at minimum have to learn upon the way.
|
CommonCrawl
|
# Understanding time series data
Time series data is a type of data that is collected over a period of time at regular intervals. It is used to analyze and forecast trends, patterns, and behaviors that occur over time. Understanding time series data is essential for forecasting, as it provides the foundation for building accurate and reliable models.
In time series data, the order of observations is important, as each observation is dependent on the previous ones. This is in contrast to cross-sectional data, where each observation is independent of the others. Time series data can be found in various fields, such as finance, economics, weather forecasting, and sales forecasting.
There are two main components of time series data: trend and seasonality. Trend refers to the long-term increase or decrease in the data, while seasonality refers to the recurring patterns or cycles that occur within a specific time period. Understanding and identifying these components is crucial for accurate forecasting.
To analyze time series data, it is important to visualize the data using plots and graphs. This helps in identifying any patterns, trends, or outliers that may exist in the data. Common plots used for time series analysis include line plots, scatter plots, and autocorrelation plots.
For example, let's consider a time series dataset that tracks the monthly sales of a retail store over a period of two years. By plotting the sales data on a line plot, we can observe any upward or downward trends, as well as any seasonal patterns that may exist.
## Exercise
Consider the following time series data:
Month: January, February, March, April, May, June
Sales: 100, 150, 120, 180, 200, 160
Plot the sales data on a line plot to visualize any trends or patterns.
### Solution
```python
import matplotlib.pyplot as plt
months = ['January', 'February', 'March', 'April', 'May', 'June']
sales = [100, 150, 120, 180, 200, 160]
plt.plot(months, sales)
plt.xlabel('Month')
plt.ylabel('Sales')
plt.title('Monthly Sales')
plt.show()
```
The line plot will show the sales data over the months, allowing us to identify any trends or patterns.
# Forecasting using moving averages
Moving averages are a commonly used technique in forecasting. They provide a simple and effective way to smooth out fluctuations in time series data and identify underlying trends. The basic idea behind moving averages is to calculate the average of a fixed number of consecutive data points and use it as a forecast for the next period.
There are different types of moving averages, including the simple moving average (SMA) and the weighted moving average (WMA). The SMA calculates the average of a fixed number of data points, while the WMA assigns different weights to each data point based on its position in the time series.
To calculate a simple moving average, you need to choose a window size, which is the number of data points to include in the average. For example, if you have monthly sales data and choose a window size of 3, you would calculate the average of the first 3 months and use it as the forecast for the fourth month. Then, you would move the window one month forward and repeat the process.
The formula for calculating the SMA is:
$$SMA_t = \frac{X_{t-1} + X_{t-2} + ... + X_{t-n}}{n}$$
where $SMA_t$ is the moving average at time $t$, $X_{t-1}, X_{t-2}, ..., X_{t-n}$ are the data points within the window, and $n$ is the window size.
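A small helper implementing the SMA formula above, shown as a sketch; the function name is illustrative, and the sales figures reuse the example that follows.

```python
# Sketch: compute all window-sized averages of a series.
def simple_moving_average(data, window):
    """Return one average per full window of `window` consecutive observations."""
    return [sum(data[i - window:i]) / window for i in range(window, len(data) + 1)]

sales = [100, 150, 120, 180, 200, 160]
print(simple_moving_average(sales, 3))  # [123.33..., 150.0, 166.66..., 180.0]
```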
Let's say we have the following monthly sales data for a retail store:
Month: January, February, March, April, May, June
Sales: 100, 150, 120, 180, 200, 160
If we choose a window size of 3, the moving averages would be:
- For April: $SMA_{April} = \frac{100 + 150 + 120}{3} = 123.33$
- For May: $SMA_{May} = \frac{150 + 120 + 180}{3} = 150$
- For June: $SMA_{June} = \frac{120 + 180 + 200}{3} = 166.67$
## Exercise
Calculate the 3-month simple moving averages for the following monthly sales data:
Month: January, February, March, April, May, June
Sales: 50, 70, 90, 80, 100, 120
### Solution
```python
sales = [50, 70, 90, 80, 100, 120]
sma_april = (sales[0] + sales[1] + sales[2]) / 3
sma_may = (sales[1] + sales[2] + sales[3]) / 3
sma_june = (sales[2] + sales[3] + sales[4]) / 3
sma_april, sma_may, sma_june
```
The 3-month moving averages for April, May, and June are 70, 80, and 90, respectively.
# Exponential smoothing and its applications
Exponential smoothing is another popular technique in forecasting. It is particularly useful for time series data that exhibit trend and seasonality. Exponential smoothing assigns exponentially decreasing weights to past observations, with more recent observations receiving higher weights.
The basic idea behind exponential smoothing is to calculate a weighted average of past observations, where the weights decrease exponentially as the observations get older. This allows the forecast to react more quickly to recent changes in the data, while still taking into account the overall trend and seasonality.
There are different types of exponential smoothing methods, including simple exponential smoothing (SES), double exponential smoothing (DES), and triple exponential smoothing (TES). SES is used for data with no trend or seasonality, DES is used for data with trend but no seasonality, and TES is used for data with both trend and seasonality.
The formula for calculating the forecast using simple exponential smoothing is:
$$F_{t+1} = \alpha \cdot X_t + (1 - \alpha) \cdot F_t$$
where $F_{t+1}$ is the forecast for the next period, $X_t$ is the actual observation at time $t$, $F_t$ is the forecast at time $t$, and $\alpha$ is the smoothing parameter (0 < $\alpha$ < 1).
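A small sketch of the smoothing recursion above; it initializes the first forecast to the first observation, the same convention used in the worked example that follows.

```python
# Sketch: simple exponential smoothing, with F_1 = X_1 as the starting forecast.
def simple_exponential_smoothing(data, alpha):
    forecasts = [data[0]]                       # first forecast equals first observation
    for x in data[:-1]:
        forecasts.append(alpha * x + (1 - alpha) * forecasts[-1])
    return forecasts

sales = [100, 150, 120, 180, 200, 160]
print(simple_exponential_smoothing(sales, 0.5))
# [100, 100.0, 125.0, 122.5, 151.25, 175.625]
```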
Let's say we have the following monthly sales data for a retail store:
Month: January, February, March, April, May, June
Sales: 100, 150, 120, 180, 200, 160
If we use simple exponential smoothing with $\alpha$ = 0.5, the forecasts would be:
- For February: $F_{February} = 0.5 \cdot 100 + 0.5 \cdot 100 = 100$
- For March: $F_{March} = 0.5 \cdot 150 + 0.5 \cdot 100 = 125$
- For April: $F_{April} = 0.5 \cdot 120 + 0.5 \cdot 125 = 122.5$
- For May: $F_{May} = 0.5 \cdot 180 + 0.5 \cdot 122.5 = 151.25$
- For June: $F_{June} = 0.5 \cdot 200 + 0.5 \cdot 151.25 = 175.63$
## Exercise
Calculate the forecasts for the following monthly sales data using simple exponential smoothing with $\alpha$ = 0.3:
Month: January, February, March, April, May, June
Sales: 50, 70, 90, 80, 100, 120
### Solution
```python
sales = [50, 70, 90, 80, 100, 120]
alpha = 0.3
forecast_feb = alpha * sales[0] + (1 - alpha) * sales[0]
forecast_march = alpha * sales[1] + (1 - alpha) * forecast_feb
forecast_april = alpha * sales[2] + (1 - alpha) * forecast_march
forecast_may = alpha * sales[3] + (1 - alpha) * forecast_april
forecast_june = alpha * sales[4] + (1 - alpha) * forecast_may
forecast_feb, forecast_march, forecast_april, forecast_may, forecast_june
```
The forecasts for February, March, April, May, and June are 50, 56, 66.2, 70.34, and 79.24, respectively.
# Forecast evaluation metrics
Forecast evaluation metrics are used to assess the accuracy and reliability of forecasting models. They provide quantitative measures that allow us to compare different models and choose the one that performs the best. There are several commonly used forecast evaluation metrics, including mean absolute error (MAE), mean squared error (MSE), and root mean squared error (RMSE).
MAE measures the average absolute difference between the forecasted values and the actual values. It is calculated by summing the absolute differences and dividing by the number of observations.
MSE measures the average squared difference between the forecasted values and the actual values. It is calculated by summing the squared differences and dividing by the number of observations.
RMSE is the square root of MSE and provides a measure of the average magnitude of the forecast errors.
The formula for calculating MAE is:
$$MAE = \frac{\sum_{i=1}^{n} |F_i - A_i|}{n}$$
where $MAE$ is the mean absolute error, $F_i$ is the forecasted value at time $i$, $A_i$ is the actual value at time $i$, and $n$ is the number of observations.
The formula for calculating MSE is:
$$MSE = \frac{\sum_{i=1}^{n} (F_i - A_i)^2}{n}$$
where $MSE$ is the mean squared error, $F_i$ is the forecasted value at time $i$, $A_i$ is the actual value at time $i$, and $n$ is the number of observations.
Let's say we have the following actual and forecasted sales data for a retail store:
Month: January, February, March, April, May, June
Actual Sales: 100, 150, 120, 180, 200, 160
Forecasted Sales: 110, 140, 130, 170, 190, 150
To calculate the MAE, we first calculate the absolute differences between the forecasted and actual values:
Absolute Differences: 10, 10, 10, 10, 10, 10
Then, we calculate the average of the absolute differences:
MAE = (10 + 10 + 10 + 10 + 10 + 10) / 6 = 10
To calculate the MSE, we first calculate the squared differences between the forecasted and actual values:
Squared Differences: 100, 100, 100, 100, 100, 100
Then, we calculate the average of the squared differences:
MSE = (100 + 100 + 100 + 100 + 100 + 100) / 6 = 100
The RMSE is the square root of MSE:
RMSE = sqrt(100) = 10
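The three formulas can be wrapped in small helper functions. The sketch below re-checks the worked example above and assumes nothing beyond plain Python; the function names are mine.

```python
# Sketch: reusable MAE, MSE, and RMSE helpers.
def mae(actual, forecast):
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def mse(actual, forecast):
    return sum((f - a) ** 2 for f, a in zip(forecast, actual)) / len(actual)

def rmse(actual, forecast):
    return mse(actual, forecast) ** 0.5

actual = [100, 150, 120, 180, 200, 160]
forecast = [110, 140, 130, 170, 190, 150]
print(mae(actual, forecast), mse(actual, forecast), rmse(actual, forecast))  # 10.0 100.0 10.0
```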
## Exercise
Calculate the MAE, MSE, and RMSE for the following actual and forecasted sales data:
Month: January, February, March, April, May, June
Actual Sales: 50, 70, 90, 80, 100, 120
Forecasted Sales: 60, 80, 100, 90, 110, 130
### Solution
```python
actual_sales = [50, 70, 90, 80, 100, 120]
forecasted_sales = [60, 80, 100, 90, 110, 130]
absolute_differences = [abs(forecasted_sales[i] - actual_sales[i]) for i in range(len(actual_sales))]
squared_differences = [(forecasted_sales[i] - actual_sales[i])**2 for i in range(len(actual_sales))]
mae = sum(absolute_differences) / len(actual_sales)
mse = sum(squared_differences) / len(actual_sales)
rmse = mse ** 0.5
mae, mse, rmse
```
The MAE, MSE, and RMSE are 10, 100, and 10, respectively.
# Regression models for forecasting
Regression models are widely used in forecasting, especially when there is a relationship between the dependent variable (the variable to be forecasted) and one or more independent variables. Regression models allow us to estimate the values of the dependent variable based on the values of the independent variables.
In forecasting, regression models can be used to capture the relationship between the dependent variable and various factors that influence it. These factors can include historical data, economic indicators, demographic variables, and other relevant factors.
There are different types of regression models, including simple linear regression, multiple linear regression, and polynomial regression. Simple linear regression models the relationship between the dependent variable and a single independent variable, while multiple linear regression models the relationship between the dependent variable and multiple independent variables.
The formula for simple linear regression is:
$$Y = \beta_0 + \beta_1X$$
where $Y$ is the dependent variable, $X$ is the independent variable, $\beta_0$ is the intercept, and $\beta_1$ is the slope.
To estimate the values of $\beta_0$ and $\beta_1$, we can use the least squares method, which minimizes the sum of the squared differences between the observed and predicted values.
Let's say we have the following data for a retail store:
Month: January, February, March, April, May, June
Sales: 100, 150, 120, 180, 200, 160
Advertising Expenses: 10, 20, 15, 25, 30, 20
If we want to use advertising expenses to forecast sales, we can build a simple linear regression model. The model would be:
$$Sales = \beta_0 + \beta_1 \cdot Advertising Expenses$$
By applying the least squares method, we can estimate the values of $\beta_0$ and $\beta_1$:
$\beta_1 = \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n} (X_i - \bar{X})^2}$
$\beta_0 = \bar{Y} - \beta_1 \cdot \bar{X}$
where $X_i$ and $Y_i$ are the values of the independent and dependent variables, respectively, and $\bar{X}$ and $\bar{Y}$ are the means of the independent and dependent variables, respectively.
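As a cross-check of the closed-form estimates, numpy's polyfit fits the same least-squares line; the expected values in the comment refer to the advertising example above, and the variable names are illustrative.

```python
# Sketch: verify the least-squares slope and intercept with numpy.
import numpy as np

expenses = [10, 20, 15, 25, 30, 20]
sales = [100, 150, 120, 180, 200, 160]

beta_1, beta_0 = np.polyfit(expenses, sales, 1)   # polyfit returns slope first, then intercept
print(beta_0, beta_1)                             # roughly 47.67 and 5.2 for this data
```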
## Exercise
Calculate the values of $\beta_0$ and $\beta_1$ for the following data:
Month: January, February, March, April, May, June
Sales: 50, 70, 90, 80, 100, 120
Advertising Expenses: 10, 20, 15, 25, 30, 20
### Solution
```python
sales = [50, 70, 90, 80, 100, 120]
expenses = [10, 20, 15, 25, 30, 20]
mean_sales = sum(sales) / len(sales)
mean_expenses = sum(expenses) / len(expenses)
numerator = sum([(expenses[i] - mean_expenses) * (sales[i] - mean_sales) for i in range(len(sales))])
denominator = sum([(expenses[i] - mean_expenses) ** 2 for i in range(len(expenses))])
beta_1 = numerator / denominator
beta_0 = mean_sales - beta_1 * mean_expenses
beta_0, beta_1
```
The values of $\beta_0$ and $\beta_1$ are 49 and 1.8, respectively.
# Time series decomposition and trend analysis
Time series decomposition is a technique used to separate a time series into its different components, including trend, seasonality, and residual. Trend analysis focuses on identifying and analyzing the long-term pattern or direction of a time series.
Time series decomposition is useful for understanding the underlying structure of a time series and identifying any patterns or trends that may exist. It allows us to isolate and analyze each component separately, which can be helpful for forecasting and decision making.
There are different methods for time series decomposition, including the classical decomposition method and the moving average decomposition method. The classical decomposition method uses statistical techniques to estimate the trend, seasonality, and residual components, while the moving average decomposition method uses moving averages to estimate the trend component.
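A sketch of classical decomposition using seasonal_decompose from statsmodels, assuming that library is available; the two-year synthetic series (a linear trend plus a yearly sine cycle) is made up for illustration.

```python
# Sketch: classical additive decomposition of a synthetic monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

index = pd.date_range("2020-01-01", periods=24, freq="MS")
values = np.arange(24) * 2 + 10 * np.sin(2 * np.pi * np.arange(24) / 12) + 100
series = pd.Series(values, index=index)

result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())    # estimated long-term trend
print(result.seasonal.head())          # estimated repeating seasonal pattern
```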
Trend analysis involves analyzing the long-term pattern or direction of a time series. It helps in understanding whether the time series is increasing, decreasing, or remaining stable over time. Trend analysis can be done visually using line plots or quantitatively using statistical techniques.
There are different types of trends that can be observed in a time series, including upward trend, downward trend, and no trend. An upward trend indicates that the time series is increasing over time, a downward trend indicates that the time series is decreasing over time, and no trend indicates that the time series is remaining stable.
Let's say we have the following monthly sales data for a retail store:
Month: January, February, March, April, May, June
Sales: 100, 150, 120, 180, 200, 160
By visualizing the data using a line plot, we can observe that there is an upward trend in the sales over time. This indicates that the sales are increasing.
## Exercise
Consider the following monthly sales data:
Month: January, February, March, April, May, June
Sales: 50, 70, 90, 80, 100, 120
Plot the sales data on a line plot to visualize the trend.
### Solution
```python
import matplotlib.pyplot as plt

months = ['January', 'February', 'March', 'April', 'May', 'June']
sales = [50, 70, 90, 80, 100, 120]
plt.plot(months, sales)
plt.xlabel('Month')
plt.ylabel('Sales')
plt.title('Monthly Sales')
plt.show()
```
The line plot will show the sales data over the months, allowing us to identify the trend.
# Seasonal forecasting methods
Seasonal forecasting methods are used to forecast time series data that exhibit regular patterns or cycles, known as seasonality. These methods take into account the recurring patterns and adjust the forecasts accordingly.
There are different seasonal forecasting methods, including seasonal naive method, seasonal exponential smoothing, and seasonal ARIMA. The seasonal naive method uses the value from the same season in the previous year as the forecast for the current season. Seasonal exponential smoothing extends simple exponential smoothing to include seasonality, while seasonal ARIMA models capture both the trend and seasonality in the data.
Seasonal forecasting methods are particularly useful for forecasting data that exhibit strong seasonality, such as sales data with regular peaks and troughs throughout the year.
To apply seasonal forecasting methods, it is important to identify the seasonality in the data. This can be done visually by plotting the data and observing any recurring patterns or cycles. Autocorrelation plots can also be used to identify the seasonality by measuring the correlation between the time series and its lagged values.
Once the seasonality is identified, it can be incorporated into the forecasting models to improve the accuracy of the forecasts. The models can then be used to generate forecasts for future time periods, taking into account the seasonal patterns.
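A minimal sketch of the seasonal naive method mentioned above: each future month is forecast with the value observed one season earlier. The two years of monthly sales are illustrative numbers.

```python
# Sketch: seasonal naive forecast using the value from one season (12 months) ago.
def seasonal_naive_forecast(data, season_length, horizon):
    return [data[-season_length + (h % season_length)] for h in range(horizon)]

sales = [100, 150, 120, 180, 200, 160, 120, 100, 90, 120, 150, 200,
         110, 160, 130, 190, 210, 170, 130, 110, 100, 130, 160, 210]
print(seasonal_naive_forecast(sales, 12, 3))  # [110, 160, 130] -> last year's Jan-Mar values
```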
Let's say we have the following monthly sales data for a retail store:
Month: January, February, March, April, May, June, July, August, September, October, November, December
Sales: 100, 150, 120, 180, 200, 160, 120, 100, 90, 120, 150, 200
By visualizing the data using a line plot, we can observe that there is a recurring pattern in the sales data, with peaks around May and December and a trough in late summer. This indicates that the sales exhibit seasonality.
## Exercise
Consider the following monthly sales data:
Month: January, February, March, April, May, June, July, August, September, October, November, December
Sales: 50, 70, 90, 80, 100, 120, 110, 130, 120, 140, 160, 150
Plot the sales data on a line plot to visualize the seasonality.
### Solution
```python
import matplotlib.pyplot as plt

months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
sales = [50, 70, 90, 80, 100, 120, 110, 130, 120, 140, 160, 150]
plt.plot(months, sales)
plt.xlabel('Month')
plt.ylabel('Sales')
plt.title('Monthly Sales')
plt.show()
```
The line plot will show the sales data over the months, allowing us to identify any seasonal patterns.
# Forecasting with ARIMA models
ARIMA (Autoregressive Integrated Moving Average) models are widely used in time series forecasting. They capture both the autoregressive (AR) and moving average (MA) components of a time series, as well as the integration (I) component, which deals with non-stationarity.
ARIMA models are particularly useful for forecasting time series data that exhibit trend and seasonality. They can capture the dependencies between past and future observations, as well as the impact of past forecast errors on future forecasts.
ARIMA models are defined by three parameters: p, d, and q. The p parameter represents the order of the autoregressive component, the d parameter represents the order of differencing required to make the time series stationary, and the q parameter represents the order of the moving average component.
To apply ARIMA models, it is important to identify the order of differencing required to make the time series stationary. This can be done visually by plotting the data and observing any trends or patterns. Augmented Dickey-Fuller (ADF) test can also be used to test for stationarity.
Once the order of differencing is determined, the next step is to identify the values of p and q. This can be done using autocorrelation function (ACF) and partial autocorrelation function (PACF) plots. ACF measures the correlation between the time series and its lagged values, while PACF measures the correlation between the time series and its lagged values after removing the effects of intermediate lags.
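A sketch of this workflow with statsmodels, assuming that library is available; the series and the (1, 1, 1) order are purely illustrative, whereas in practice p, d, and q would come from the ADF test and the ACF/PACF plots described above.

```python
# Sketch: ADF stationarity test plus an illustrative ARIMA(1, 1, 1) fit.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA

sales = pd.Series([100, 150, 120, 180, 200, 160, 120, 100, 90, 120, 150, 200,
                   110, 160, 130, 190, 210, 170, 130, 110, 100, 130, 160, 210])

adf_stat, p_value, *_ = adfuller(sales)
print(f"ADF p-value: {p_value:.3f}")    # a large p-value suggests differencing is needed

model = ARIMA(sales, order=(1, 1, 1)).fit()
print(model.forecast(steps=3))          # forecasts for the next 3 periods
```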
Let's say we have the following monthly sales data for a retail store:
Month: January, February, March, April, May, June, July, August, September, October, November, December
Sales: 100, 150, 120, 180, 200, 160, 120, 100, 90, 120, 150, 200
By visualizing the data using a line plot, we can observe that there is a trend and seasonality in the sales data. This indicates that an ARIMA model would be appropriate for forecasting.
## Exercise
Consider the following monthly sales data:
Month: January, February, March, April, May, June, July, August, September, October, November, December
Sales: 50, 70, 90, 80, 100, 120, 110, 130, 120, 140, 160, 150
Plot the sales data on a line plot to visualize the trend and seasonality.
### Solution
```python
import matplotlib.pyplot as plt

months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
sales = [50, 70, 90, 80, 100, 120, 110, 130, 120, 140, 160, 150]
plt.plot(months, sales)
plt.xlabel('Month')
plt.ylabel('Sales')
plt.title('Monthly Sales')
plt.show()
```
The line plot will show the sales data over the months, allowing us to identify the trend and seasonality.
# Forecasting with machine learning techniques
Machine learning techniques are increasingly being used in forecasting due to their ability to capture complex patterns and relationships in data. These techniques can be used to build models that automatically learn from historical data and make accurate forecasts.
There are different machine learning techniques that can be used for forecasting, including linear regression, decision trees, random forests, and neural networks. These techniques can handle both numerical and categorical data, and can capture non-linear relationships between the input variables and the target variable.
Machine learning techniques require a training phase, where the model is trained on historical data, and a testing phase, where the model is evaluated on new data. The performance of the model can be assessed using various evaluation metrics, such as mean absolute error (MAE) and root mean squared error (RMSE).
To apply machine learning techniques for forecasting, it is important to preprocess the data and select the appropriate features. Preprocessing steps can include data cleaning, normalization, and feature engineering. Feature selection involves selecting the most relevant features that have the most impact on the target variable.
Once the data is preprocessed and the features are selected, the next step is to train the machine learning model using the training data. The model can then be used to make forecasts for new data.
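A sketch of the train-and-forecast workflow with scikit-learn; the two features (advertising spend and a month index) and the sales figures are invented for the example.

```python
# Sketch: fit a linear regression on the first periods and evaluate on the last two.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

X = [[10, 1], [20, 2], [15, 3], [25, 4], [30, 5], [20, 6], [22, 7], [28, 8]]
y = [50, 70, 90, 80, 100, 120, 110, 130]

X_train, X_test = X[:6], X[6:]          # keep the last two periods for testing
y_train, y_test = y[:6], y[6:]

model = LinearRegression().fit(X_train, y_train)
predictions = model.predict(X_test)
print(mean_absolute_error(y_test, predictions))
```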
Let's say we have the following monthly sales data for a retail store:
Month: January, February, March, April, May, June, July, August, September, October, November, December
Sales: 100, 150, 120, 180, 200, 160, 120, 100, 90, 120, 150, 200
We can use a machine learning technique, such as linear regression, to build a model that predicts the sales based on other variables, such as advertising expenses, promotions, and seasonality.
## Exercise
Consider the following monthly sales data:
Month: January, February, March, April, May, June, July, August, September, October, November, December
Sales: 50, 70, 90, 80, 100, 120, 110, 130, 120, 140, 160, 150
Identify one or more variables that could be used to predict the sales and explain why they are relevant.
### Solution
One variable that could be used to predict the sales is advertising expenses. It is relevant because it is expected to have an impact on the sales. By increasing the advertising expenses, the store can potentially attract more customers and increase the sales.
# Integrating external factors into forecasting
Integrating external factors into forecasting involves considering additional variables that may impact the target variable being forecasted. These external factors can include economic indicators, weather conditions, social trends, and other relevant factors that may influence the forecast.
By incorporating external factors into the forecasting model, we can improve the accuracy and reliability of the forecasts. This is because these factors can provide valuable insights into the underlying dynamics and drivers of the target variable.
There are different approaches to integrating external factors into forecasting. One common approach is to include these factors as additional input variables in the forecasting model. For example, if we are forecasting sales for a retail store, we can include variables such as consumer sentiment index, unemployment rate, and average temperature as inputs to the model.
Another approach is to use external factors as indicators or proxies for the target variable. For example, if we are forecasting demand for a product, we can use the number of online searches for that product as an indicator of future demand.
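One way to implement this, sketched below with statsmodels' SARIMAX, is to pass the external factor as an exogenous regressor; the temperature series, the sales figures, and the model order are assumptions made for illustration.

```python
# Sketch: monthly sales modelled with an external temperature regressor.
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

sales = pd.Series([100, 150, 120, 180, 200, 160, 120, 100, 90, 120, 150, 200])
temperature = pd.Series([5, 7, 12, 18, 22, 27, 30, 29, 24, 17, 10, 6])

model = SARIMAX(sales, exog=temperature, order=(1, 0, 0)).fit(disp=False)

# Forecasting ahead requires assumed future values of the external factor.
future_temperature = pd.Series([4, 6])
print(model.forecast(steps=2, exog=future_temperature))
```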
Let's consider the example of forecasting demand for a ride-sharing service. In addition to historical ride data, we can include external factors such as weather conditions, events happening in the city, and public transportation schedules as inputs to the forecasting model. By considering these factors, we can better predict the demand for the ride-sharing service and make more informed decisions.
## Exercise
Consider the following scenario: You are forecasting sales for a clothing store. Identify two external factors that could potentially impact the sales and explain why they are relevant.
### Solution
Two external factors that could impact the sales of a clothing store are:
1. Seasonality: Seasonal changes in weather and fashion trends can significantly impact the demand for clothing. For example, sales of winter clothing are likely to be higher during the colder months, while sales of summer clothing may increase during the warmer months.
2. Economic conditions: Economic factors such as consumer confidence, unemployment rate, and disposable income can influence consumer spending on clothing. During periods of economic growth and stability, consumers may be more willing to spend on clothing, while during economic downturns, consumers may cut back on discretionary purchases.
# Forecasting in practice: case studies and best practices
Case studies provide us with a practical understanding of how forecasting techniques and methods are applied in different industries and contexts. We will analyze the challenges faced, the forecasting methods used, and the outcomes achieved. This will help us understand the nuances and complexities of forecasting in practice.
Best practices in forecasting are established guidelines and strategies that have been proven to be effective in achieving accurate and reliable forecasts. These practices are based on years of research and experience in the field of forecasting. By following these best practices, we can improve the quality of our forecasts and make more informed decisions.
One case study we will examine is the forecasting of demand for a new product launch. We will explore the challenges faced by a company in predicting the demand for a new smartphone model and the forecasting methods used to overcome these challenges. By understanding the factors that influence demand and the forecasting techniques employed, we can learn valuable lessons for our own forecasting projects.
## Exercise
Think of a real-world scenario where accurate forecasting is crucial. Identify the key challenges that could arise in forecasting for this scenario and propose a potential forecasting method to address these challenges.
### Solution
One scenario where accurate forecasting is crucial is in the airline industry for flight demand. The key challenges in forecasting flight demand include seasonality, unpredictable events (such as weather disruptions or geopolitical events), and the impact of external factors (such as economic conditions or travel restrictions).
To address these challenges, a potential forecasting method could be a combination of time series analysis and machine learning techniques. Time series analysis can capture the seasonality and trends in flight demand, while machine learning algorithms can incorporate external factors and adapt to changing conditions. By combining these approaches, airlines can make more accurate forecasts and optimize their operations accordingly.
|
Textbooks
|
31 July, 2018 The slide file was uploaded.
law of the iterated logarithm.
Journal of Logic and Analysis, Vol 10, pp.1–13, 2018.
We consider the behaviour of Schnorr randomness, a randomness notion weaker than Martin-Löf's, for left-r.e. reals under Solovay reducibility. Contrasting with results on Martin-Löf randomness, we show that Schnorr randomness is not upward closed in the Solovay degrees. Next, we show that some left-r.e. Schnorr random $\alpha$ is the sum of two left-r.e. reals that are far from random. We also show that the left-r.e. reals of effective dimension $>r$, for some rational $r$, form a filter in the Solovay degrees.
1 Mar, 2018 The slide file was uploaded.
|
CommonCrawl
|
October 2012, 32(10): 3733-3771. doi: 10.3934/dcds.2012.32.3733
Bounds on the growth of high Sobolev norms of solutions to 2D Hartree equations
Vedran Sohinger 1,
Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139, United States
Received May 2010 Revised January 2012 Published May 2012
In this paper, we consider Hartree-type equations on the two-dimensional torus and on the plane. We prove polynomial bounds on the growth of high Sobolev norms of solutions to these equations. The proofs of our results are based on the adaptation to two dimensions of the techniques we had previously used in [49, 50] to study the analogous problem in one dimension. Since we are working in two dimensions, a more detailed analysis of the resonant frequencies is needed, as was previously used in the work of Colliander-Keel-Staffilani-Takaoka-Tao [19].
Keywords: growth of high Sobolev norms, resonant decomposition, Hartree equation, nonlinear Schrödinger equation.
Mathematics Subject Classification: 35Q5.
Citation: Vedran Sohinger. Bounds on the growth of high Sobolev norms of solutions to 2D Hartree equations. Discrete & Continuous Dynamical Systems - A, 2012, 32 (10) : 3733-3771. doi: 10.3934/dcds.2012.32.3733
D. Benney and A. Newell, Random wave closures, Stud. Appl. Math., 48 (1969), 29.
D. Benney and P. Saffman, Nonlinear interactions of random waves in a dispersive medium, Proc. Roy. Soc. A, 289 (1966), 301. doi: 10.1098/rspa.1966.0013.
J. Bourgain, Fourier transform restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations. I. Schrödinger equations, Geom. Funct. Anal., 3 (1993), 107.
J. Bourgain, On the growth in time of higher Sobolev norms of smooth solutions of Hamiltonian PDE, Int. Math. Research Notices, 1996, 277.
J. Bourgain, Refinements of Strichartz's inequality and applications to 2D-NLS with critical nonlinearity, Int. Math. Research Notices, 1998, 253.
J. Bourgain, "Nonlinear Schrödinger Equations," in, 5 (1999), 3.
J. Bourgain, Global solutions of nonlinear Schrödinger equations, AMS Colloquium Publications, 46 (1999).
J. Bourgain, On growth of Sobolev norms in linear Schrödinger equations with smooth time dependent potential, J. Anal. Math., 77 (1999), 315. doi: 10.1007/BF02791265.
N. Burq, P. Gérard and N. Tzvetkov, An instability property of the nonlinear Schrödinger equation on $S^d$, Mathematical Research Letters, 9 (2002), 323.
N. Burq, P. Gérard and N. Tzvetkov, Bilinear eigenfunction estimates and the nonlinear Schrödinger equation on surfaces, Invent. Math., 159 (2005), 187. doi: 10.1007/s00222-004-0388-x.
F. Catoire and W.-M. Wang, Bounds on Sobolev norms for the nonlinear Schrödinger equation on general tori, preprint, (2008).
T. Cazenave, "Semilinear Schrödinger Equations," Courant Lecture Notes in Mathematics, 10 (2003).
J. Colliander, J.-M. Delort, C. E. Kenig and G. Staffilani, Bilinear estimates and applications to 2D NLS, Trans. of the American Math. Soc., 353 (2001), 3307. doi: 10.1090/S0002-9947-01-02760-X.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Global well-posedness for Schrödinger equations with derivative, SIAM J. Math. Anal., 33 (2001), 649. doi: 10.1137/S0036141001384387.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Polynomial upper bounds for the orbital instability of the 1D cubic NLS below the energy norm, Discrete Contin. Dyn. Syst., 9 (2003), 31.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Sharp global well-posedness for KdV and modified KdV on $\mathbb{R}$ and $\mathbb{T}$, J. Amer. Math. Soc., 16 (2003), 705. doi: 10.1090/S0894-0347-03-00421-1.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Multilinear estimates for periodic KdV equations, and applications, J. Funct. Anal., 211 (2004), 173. doi: 10.1016/S0022-1236(03)00218-0.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, A refined global well-posedness for Schrödinger equations with derivative, SIAM J. Math. Anal., 34 (2002), 64. doi: 10.1137/S0036141001394541.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Resonant decompositions and the I-method for cubic nonlinear Schrödinger equation on $\mathbb{R}^2$, Disc. and Cont. Dynam. Sys., 21 (2008), 665. doi: 10.3934/dcds.2008.21.665.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Global well-posedness and scattering for the energy-critical nonlinear Schrödinger equation in $\mathbb{R}^3$, Ann. of Math. (2), 167 (2008), 767. doi: 10.4007/annals.2008.167.767.
J. Colliander, M. Keel, G. Staffilani, H. Takaoka and T. Tao, Transfer of energy to high frequencies in the cubic nonlinear Schrödinger equation, Invent. Math., 181 (2010), 39. doi: 10.1007/s00222-010-0242-2.
J.-M. Delort, Growth of Sobolev norms of solutions of linear Schrödinger equations on some compact manifolds, preprint, 2010, 2305.
B. Dodson, Global well-posedness and scattering for the defocusing, $L^2$-critical, nonlinear Schrödinger equation when $d \geq 3$, preprint, (2009).
B. Dodson, Global well-posedness and scattering for the defocusing, $L^2$-critical, nonlinear Schrödinger equation when $d=2$, preprint, (2010).
J. Duoandikoetxea, "Fourier Analysis," Graduate Studies in Mathematics, 29 (2001).
J. Fröhlich and E. Lenzmann, "Mean-Field Limit of Quantum Bose Gases and Nonlinear Hartree Equation," Séminaire: É.D.P. 2003-2004, (2004), 2003.
J. Ginibre and T. Ozawa, Long-range scattering for non-linear Schrödinger and Hartree equations in space dimension $n\geq 2$, Comm. Math. Phys., 151 (1993), 619. doi: 10.1007/BF02097031.
J. Ginibre and G. Velo, On a class of nonlinear Schrödinger equations. I. The Cauchy problem, general case, J. Funct. Anal., 32 (1979), 1. doi: 10.1016/0022-1236(79)90076-4.
J. Ginibre and G. Velo, On a class of nonlinear Schrödinger equations with nonlocal interaction, Math. Z., 170 (1980), 109. doi: 10.1007/BF01214768.
J. Ginibre and G. Velo, Scattering theory in the energy space for a class of Hartree equations, Rev. Math. Phys., 12 (2000), 361. doi: 10.1142/S0129055X00000137.
J. Ginibre and G. Velo, Long range scattering and modified wave operators for some Hartree type equations. II, Ann. H. P., 1 (2000), 753.
J. Ginibre and G. Velo, Long range scattering and modified wave operators for some Hartree type equations. III. Gevrey spaces and low dimensions, J. Diff. Eq., 175 (2001), 415. doi: 10.1006/jdeq.2000.3969.
A. Grünrock, On the Cauchy- and periodic boundary value problem for a certain class of derivative nonlinear Schrödinger equations, preprint, (2006).
Z. Hani, Private communication.
N. Hayashi, P. Naumkin and T. Ozawa, "Scattering Theory for the Hartree Equation," Hokkaido University Preprints, (1996).
C. Kenig, G. Ponce and L. Vega, The Cauchy problem for the Korteweg-de Vries equation in Sobolev spaces of negative indices, Duke Math. J., 71 (1993), 1.
C. Kenig, G. Ponce and L. Vega, Quadratic forms for the 1-D semilinear Schrödinger equation, Transactions of the AMS, 348 (1996), 3323. doi: 10.1090/S0002-9947-96-01645-5.
C. Miao, Y. Wu and G. Xu, Dynamics for the focusing, energy-critical nonlinear Hartree equation, preprint, (2011).
C. Miao, G. Xu and L. Zhao, Global well-posedness and scattering for the energy critical, defocusing Hartree equation for radial data, J. Funct. Anal., 253 (2007), 605. doi: 10.1016/j.jfa.2007.09.008.
C. Miao, G. Xu and L. Zhao, The Cauchy problem for the Hartree equation, J. PDEs, 21 (2008), 22.
C. Miao, G. Xu and L. Zhao, Global well-posedness, scattering, and blow-up for the energy critical, focusing Hartree equation in the radial case, Coll. Math., 114 (2009), 213. doi: 10.4064/cm114-2-5.
C. Miao, G. Xu and L. Zhao, Global well-posedness and scattering for the defocusing $H^{\frac{1}{2}}$-subcritical Hartree equation on $\mathbb{R}^d$, Ann. I. H. Poincaré, 26 (2009), 1831.
C. Miao, G. Xu and L. Zhao, Global well-posedness and scattering for the energy-critical, defocusing Hartree equation in $\mathbb{R}^{1+n}$, Comm. PDEs, 36 (2011), 729. doi: 10.1080/03605302.2010.531073.
C. Morawetz and W. A. Strauss, Decay and scattering of solutions of a nonlinear relativistic wave equation, Comm. Pure. Appl. Math., 25 (1972), 1. doi: 10.1002/cpa.3160250103.
B. Schlein, "Derivation of Effective Evolution Equations from Microscopic Quantum Dynamics," Lecture Notes, (2008).
C. Sogge, Oscillatory integrals and spherical harmonics, Duke Math. Jour., 53 (1986), 43. doi: 10.1215/S0012-7094-86-05303-2.
C. Sogge, Concerning the $\ell^p$ norm of spectral clusters for second order elliptic operators on compact manifolds, Jour. of Funct. Anal., 77 (1988), 123. doi: 10.1016/0022-1236(88)90081-X.
V. Sohinger, Bounds on the growth of high Sobolev norms of solutions to Nonlinear Schrödinger Equations on $S^1$, to appear in Diff. and Int. Eqs., (2010).
V. Sohinger, Bounds on the growth of high Sobolev norms of solutions to nonlinear Schrödinger equations on $\mathbb{R}$, to appear in Indiana Univ. Math. J., (2010).
G. Staffilani, On the growth of high Sobolev norms of solutions for KdV and Schrödinger equations, Duke Math. J., 86 (1997), 109. doi: 10.1215/S0012-7094-97-08604-X.
G. Staffilani, Quadratic forms for a 2-D semilinear Schrödinger equation, Duke Math. J., 86 (1997), 79. doi: 10.1215/S0012-7094-97-08603-8.
T. Tao, "Nonlinear Dispersive Equations: Local and Global Analysis," CBMS Reg. Conf. Series in Math., 106 (2006).
V. E. Zakharov, Stability of periodic waves of finite amplitude on a surface of deep fluid,, J. Appl. Mech. Tech. Phys., 9 (1968), 190. Google Scholar
S.-J. Zhong, The growth in time of higher Sobolev norms of solutions to Schrödinger equations on compact Riemannian manifolds,, J. Differential Equations, 245 (2008), 359. doi: 10.1016/j.jde.2008.03.008. Google Scholar
Myeongju Chae, Soonsik Kwon. The stability of nonlinear Schrödinger equations with a potential in high Sobolev norms revisited. Communications on Pure & Applied Analysis, 2016, 15 (2) : 341-365. doi: 10.3934/cpaa.2016.15.341
Joackim Bernier. Bounds on the growth of high discrete Sobolev norms for the cubic discrete nonlinear Schrödinger equations on $ h\mathbb{Z} $. Discrete & Continuous Dynamical Systems - A, 2019, 39 (6) : 3179-3195. doi: 10.3934/dcds.2019131
F. Catoire, W. M. Wang. Bounds on Sobolev norms for the defocusing nonlinear Schrödinger equation on general flat tori. Communications on Pure & Applied Analysis, 2010, 9 (2) : 483-491. doi: 10.3934/cpaa.2010.9.483
Dario Bambusi, A. Carati, A. Ponno. The nonlinear Schrödinger equation as a resonant normal form. Discrete & Continuous Dynamical Systems - B, 2002, 2 (1) : 109-128. doi: 10.3934/dcdsb.2002.2.109
Walid K. Abou Salem, Xiao Liu, Catherine Sulem. Numerical simulation of resonant tunneling of fast solitons for the nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2011, 29 (4) : 1637-1649. doi: 10.3934/dcds.2011.29.1637
Seckin Demirbas. Local well-posedness for 2-D Schrödinger equation on irrational tori and bounds on Sobolev norms. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1517-1530. doi: 10.3934/cpaa.2017072
Binhua Feng, Xiangxia Yuan. On the Cauchy problem for the Schrödinger-Hartree equation. Evolution Equations & Control Theory, 2015, 4 (4) : 431-445. doi: 10.3934/eect.2015.4.431
François Genoud. Existence and stability of high frequency standing waves for a nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2009, 25 (4) : 1229-1247. doi: 10.3934/dcds.2009.25.1229
Jianqing Chen. Sharp variational characterization and a Schrödinger equation with Hartree type nonlinearity. Discrete & Continuous Dynamical Systems - S, 2016, 9 (6) : 1613-1628. doi: 10.3934/dcdss.2016066
J. Colliander, M. Keel, Gigliola Staffilani, H. Takaoka, T. Tao. Resonant decompositions and the $I$-method for the cubic nonlinear Schrödinger equation on $\mathbb{R}^2$. Discrete & Continuous Dynamical Systems - A, 2008, 21 (3) : 665-686. doi: 10.3934/dcds.2008.21.665
D.G. deFigueiredo, Yanheng Ding. Solutions of a nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2002, 8 (3) : 563-584. doi: 10.3934/dcds.2002.8.563
Felipe Hernandez. A decomposition for the Schrödinger equation with applications to bilinear and multilinear estimates. Communications on Pure & Applied Analysis, 2018, 17 (2) : 627-646. doi: 10.3934/cpaa.2018034
Benjamin Dodson. Global well-posedness and scattering for the defocusing, cubic nonlinear Schrödinger equation when $n = 3$ via a linear-nonlinear decomposition. Discrete & Continuous Dynamical Systems - A, 2013, 33 (5) : 1905-1926. doi: 10.3934/dcds.2013.33.1905
Miaomiao Niu, Zhongwei Tang. Least energy solutions for nonlinear Schrödinger equation involving the fractional Laplacian and critical growth. Discrete & Continuous Dynamical Systems - A, 2017, 37 (7) : 3963-3987. doi: 10.3934/dcds.2017168
Hristo Genev, George Venkov. Soliton and blow-up solutions to the time-dependent Schrödinger-Hartree equation. Discrete & Continuous Dynamical Systems - S, 2012, 5 (5) : 903-923. doi: 10.3934/dcdss.2012.5.903
Pavel I. Naumkin, Isahi Sánchez-Suárez. On the critical nongauge invariant nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2011, 30 (3) : 807-834. doi: 10.3934/dcds.2011.30.807
Younghun Hong. Scattering for a nonlinear Schrödinger equation with a potential. Communications on Pure & Applied Analysis, 2016, 15 (5) : 1571-1601. doi: 10.3934/cpaa.2016003
Alexander Komech, Elena Kopylova, David Stuart. On asymptotic stability of solitons in a nonlinear Schrödinger equation. Communications on Pure & Applied Analysis, 2012, 11 (3) : 1063-1079. doi: 10.3934/cpaa.2012.11.1063
Tarek Saanouni. Remarks on the damped nonlinear Schrödinger equation. Evolution Equations & Control Theory, 2019, 0 (0) : 0-0. doi: 10.3934/eect.2020030
Milena Stanislavova, Atanas Stefanov. Effective estimates of the higher Sobolev norms for the Kuramoto-Sivashinsky equation. Conference Publications, 2009, 2009 (Special) : 729-738. doi: 10.3934/proc.2009.2009.729
Vedran Sohinger
|
CommonCrawl
|
\begin{definition}[Definition:Covariance]
Let $X$ and $Y$ be random variables.
Let $\mu_X = \expect X$ and $\mu_Y = \expect Y$, the expectations of $X$ and $Y$ respectively, exist and be finite.
Then the '''covariance''' of $X$ and $Y$ is defined by:
:$\cov {X, Y} = \expect {\paren {X - \mu_X} \paren {Y - \mu_Y} }$
where this expectation exists.
\end{definition}
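As an informal numerical illustration (not part of the formal definition above), the following Python sketch estimates the covariance of two simulated random variables by replacing each expectation with an empirical mean; the simulated relationship Y = 2X + noise and all variable names are chosen purely for demonstration.

```python
import random

def covariance(xs, ys):
    """Empirical estimate of cov(X, Y) = E[(X - mu_X)(Y - mu_Y)] from paired samples."""
    n = len(xs)
    mu_x = sum(xs) / n
    mu_y = sum(ys) / n
    return sum((x - mu_x) * (y - mu_y) for x, y in zip(xs, ys)) / n

random.seed(0)
xs = [random.gauss(0, 1) for _ in range(100_000)]
ys = [2 * x + random.gauss(0, 0.5) for x in xs]  # Y depends linearly on X
print(covariance(xs, ys))  # close to 2, i.e. 2 * Var(X) with Var(X) = 1
```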
|
ProofWiki
|
Tetractys
The tetractys (Greek: τετρακτύς), or tetrad,[1] or the tetractys of the decad[2] is a triangular figure consisting of ten points arranged in four rows: one, two, three, and four points in each row, which is the geometrical representation of the fourth triangular number. As a mystical symbol, it was very important to the secret worship of Pythagoreanism. There were four seasons, and the number was also associated with planetary motions and music.[3]
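As a quick arithmetic aside (added here for illustration, not part of the original article), the row counts of the figure can be checked in a few lines of Python: the rows of 1, 2, 3, and 4 points sum to 10, the fourth triangular number.

```python
rows = [1, 2, 3, 4]                       # points per row of the tetractys
print(sum(rows))                          # 10 points in total
print([n * (n + 1) // 2 for n in rows])   # triangular numbers 1, 3, 6, 10
```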
Pythagorean symbol
1. The first four numbers symbolize the musica universalis and the Cosmos as:
1. Monad – Unity
2. Dyad – Power – Limit/Unlimited (peras/apeiron)
3. Triad – Harmony
4. Tetrad – Kosmos[4]
2. The four rows add up to ten, which was unity of a higher order (The Dekad).
3. The Tetractys symbolizes the four classical elements—air, fire, water, and earth.
4. The Tetractys represented the organization of space:
1. the first row represented zero dimensions (a point)
2. the second row represented one dimension (a line of two points)
3. the third row represented two dimensions (a plane defined by a triangle of three points)
4. the fourth row represented three dimensions (a tetrahedron defined by four points)
A prayer of the Pythagoreans shows the importance of the Tetractys (sometimes called the "Mystic Tetrad"), as the prayer was addressed to it.
Bless us, divine number, thou who generated gods and men! O holy, holy Tetractys, thou that containest the root and source of the eternally flowing creation! For the divine number begins with the profound, pure unity until it comes to the holy four; then it begets the mother of all, the all-comprising, all-bounding, the first-born, the never-swerving, the never-tiring holy ten, the keyholder of all.[5]
As a portion of the secret religion, initiates were required to swear a secret oath by the Tetractys. They then served as novices, which required them to observe silence for a period of five years.
The Pythagorean oath also mentioned the Tetractys:
By that pure, holy, four lettered name on high,
nature's eternal fountain and supply,
the parent of all souls that living be,
by him, with faith find oath, I swear to thee.
It is said[6][7][8] that the Pythagorean musical system was based on the Tetractys as the rows can be read as the ratios of 4:3 (perfect fourth), 3:2 (perfect fifth), 2:1 (octave), forming the basic intervals of the Pythagorean scales. That is, Pythagorean scales are generated from combining pure fourths (in a 4:3 relation), pure fifths (in a 3:2 relation), and the simple ratios of the unison 1:1 and the octave 2:1. Note that the diapason, 2:1 (octave), and the diapason plus diapente, 3:1 (compound fifth or perfect twelfth), are consonant intervals according to the tetractys of the decad, but that the diapason plus diatessaron, 8:3 (compound fourth or perfect eleventh), is not.[9][10]
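The consonance rule described above can be restated computationally: an interval counts as "within the tetractys" when both terms of its reduced frequency ratio are among the numbers 1–4. The short Python sketch below (an illustration added here, not from the article) applies that test to the intervals just mentioned.

```python
from fractions import Fraction

intervals = {
    "diatessaron (perfect fourth)": Fraction(4, 3),
    "diapente (perfect fifth)": Fraction(3, 2),
    "diapason (octave)": Fraction(2, 1),
    "diapason + diapente (perfect twelfth)": Fraction(3, 1),
    "diapason + diatessaron (perfect eleventh)": Fraction(8, 3),
}

for name, ratio in intervals.items():
    # Both terms of the reduced ratio must lie within the first four numbers.
    consonant = ratio.numerator <= 4 and ratio.denominator <= 4
    print(f"{name}: {ratio} -> {'within' if consonant else 'outside'} the tetractys")
```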
The Tetractys [also known as the decad] is an equilateral triangle formed from the sequence of the first ten numbers aligned in four rows. It is both a mathematical idea and a metaphysical symbol that embraces within itself—in seedlike form—the principles of the natural world, the harmony of the cosmos, the ascent to the divine, and the mysteries of the divine realm. So revered was this ancient symbol that it inspired ancient philosophers to swear by the name of the one who brought this gift to humanity.
Kabbalist symbol
In the work by anthropologist Raphael Patai entitled The Hebrew Goddess, the author argues that the tetractys and its mysteries influenced the early Kabbalah.[11] A Hebrew tetractys has the letters of the Tetragrammaton inscribed on the ten positions of the tetractys, from right to left. It has been argued that the Kabbalistic Tree of Life, with its ten spheres of emanation, is in some way connected to the tetractys, but its form is not that of a triangle. The occultist Dion Fortune writes:
The point is assigned to Kether;
the line to Chokmah;
the two-dimensional plane to Binah;
consequently the three-dimensional solid naturally falls to Chesed.[12]
The relationship between geometrical shapes and the first four Sephirot is analogous to the geometrical correlations of the Tetraktys, shown above under Pythagorean symbol, and points to a connection between the Tree of Life and the Tetraktys.
Tarot card reading arrangement
In a Tarot reading, the various positions of the tetractys provide a layout for forecasting future events, with meanings assigned according to various occult disciplines, such as alchemy.[13] Below is only a single variation for interpretation.
The first row of a single position represents the Premise of the reading, forming a foundation for understanding all the other cards.
The second row of two positions represents the cosmos and the individual and their relationship.
• The Light Card to the right represents the influence of the cosmos leading the individual to an action.
• The Dark Card to the left represents the reaction of the cosmos to the actions of the individual.
The third row of three positions represents three kinds of decisions an individual must make.
• The Creator Card is rightmost, representing new decisions and directions that may be made.
• The Sustainer Card is in the middle, representing decisions to keep balance, and things that should not change.
• The Destroyer Card is leftmost, representing old decisions and directions that should not be continued.
The fourth row of four positions represents the four Greek elements.
• The Fire card is rightmost, representing dynamic creative force, ambitions, and personal will.
• The Air card is to the right middle, representing the mind, thoughts, and strategies toward goals.
• The Water card is to the left middle, representing the emotions, feelings, and whims.
• The Earth card is leftmost, representing physical realities of day to day living.
Occurrence
The tetractys occurs (generally coincidentally) in the following:
• the baryon decuplet
• an archbishop's coat of arms
• the arrangement of bowling pins in ten-pin bowling
• the arrangement of billiard balls in ten-ball pool
• a Chinese checkers board
• the "Christmas Tree" formation in association football
In poetry
In English-language poetry, a tetractys is a syllable-counting form with five lines. The first line has one syllable, the second has two syllables, the third line has three syllables, the fourth line has four syllables, and the fifth line has ten syllables.[14] A sample tetractys would look like this:
Mantrum
Your /
fury /
confuses /
us all greatly. /
Volatile, big-bodied tots are selfish. //
The tetractys was created by Ray Stebbing, who said the following about his newly created form:
"The tetractys could be Britain's answer to the haiku. Its challenge is to express a complete thought, profound or comic, witty or wise, within the narrow compass of twenty syllables.[15]
See also
• Pascal's triangle
References
1. The Theosophical Glossary, Forgotten Books, p. 302, ISBN 9781440073915
2. Eduard Zeller. Outlines of the History of Greek Philosophy (13 ed.). p. 36.
3. Dimitra Karamanides (2005), Pythagoras: pioneering mathematician and musical theorist of Ancient Greece, The Rosen Publishing Group, p. 65, ISBN 9781404205000
4. The Pythagorean Sourcebook and Library by Kenneth Sylvan Guthrie
5. Dantzig, Tobias ([1930], 2005) Number. The Language of Science. p. 42
6. Introduction to Arithmetic – Nicomachus
7. Bruhn, Siglind (2005), The Musical Order of the World: Kepler, Hesse, Hindemith, Pendragon Press, ISBN 9781576471173
8. A Dictionary of Greek and Roman Antiquities(1890) – William Smith, LLD, William Wayte, G. E. Marindin, Ed.
9. Plutarch, De animae procreatione in Timaeo – Goodwin, Ed. (English)
10. Pennick, Nigel (January 2012), Sacred Architecture of London – Nigel Pennick, Aeon Books, ISBN 9781904658627
11. Patai, Raphael (1967). The Hebrew Goddess. Wayne State University Press. ISBN 0-8143-2271-9. - Chapter V - The Kabbalistic Tetrad
12. The Mystical Qabalah, Dion Fortune, Chapter XVIII, 25
13. "Tetractys Spread".
14. English Syllable Counters
15. Search result for Tetractys
Further reading
• von Franz, Marie-Louise. Number and Time: Reflections Leading Towards a Unification of Psychology and Physics. Rider & Company, London, 1974. ISBN 0-09-121020-8
• Fideler, D. ed. The Pythagorean Sourcebook and Library. Phanes Press, 1987.
• The Theoretic Arithmetic of the Pythagoreans – Thomas Taylor
External links
• Examples of Tetractys poems
Ancient Greek philosophical concepts
• Adiaphora (indifferent)
• Apeiron (infinite)
• Aporia (problem)
• Arche (first principle)
• Arete (excellence)
• Ataraxia (tranquility)
• Cosmos (order)
• Diairesis (division)
• Doxa (opinion)
• Episteme (knowledge)
• Ethos (character)
• Eudaimonia (flourishing)
• Logos (reason)
• Mimesis (imitation)
• Monad (unit)
• Nous (intellect)
• Ousia (substance)
• Pathos (passion)
• Phronesis (prudence)
• Physis (nature)
• Sophia (wisdom)
• Sophrosyne (temperance)
• Techne (craft)
• Telos (goal)
• Thumos (temper)
|
Wikipedia
|
Switching nanoprecipitates to resist hydrogen embrittlement in high-strength aluminum alloys
Yafei Wang1,2,
Bhupendra Sharma1,
Yuantao Xu1,3,
Kazuyuki Shimizu4,
Hiro Fujihara1,
Kyosuke Hirayama5,
Akihisa Takeuchi6,
Masayuki Uesugi6,
Guangxu Cheng2 &
Hiroyuki Toda1
Nature Communications volume 13, Article number: 6860 (2022)
Hydrogen drastically embrittles high-strength aluminum alloys, which impedes efforts to develop ultrastrong components in the aerospace and transportation industries. Understanding and utilizing the interaction of hydrogen with core strengthening elements in aluminum alloys, particularly nanoprecipitates, are critical to break this bottleneck. Herein, we show that hydrogen embrittlement of aluminum alloys can be largely suppressed by switching nanoprecipitates from the η phase to the T phase without changing the overall chemical composition. The T phase strongly traps hydrogen and resists hydrogen-assisted crack growth, with a more than 60% reduction in the areal fractions of cracks. The T phase-induced reduction in the concentration of hydrogen at defects and interfaces, which facilitates crack growth, primarily contributes to the suppressed hydrogen embrittlement. Transforming precipitates into strong hydrogen traps is proven to be a potential mitigation strategy for hydrogen embrittlement in aluminum alloys.
Aluminum alloys with high and ultrahigh strengths are highly attractive for weight-sensitive structures such as aircraft, bullet trains, and automobiles. However, unlike the significantly elevated strength of steels up to ~1.5 GPa or even higher levels1, the development of ultrastrong aluminum alloys has almost stagnated in the past decades, largely due to the strength-hydrogen embrittlement (HE) conflict: with increasing strength, hydrogen drastically reduces the ability of aluminum alloys to sustain plastic deformation and cyclic load2. Such an effect arises from the interaction of hydrogen atoms with various micro- or nanoscale structures, i.e., so-called hydrogen trap sites, including defects in lattices, e.g., dislocations3,4 and vacancies5,6, grain boundaries7,8, interfaces of micron-scale particles9,10 and interfaces of strengthening nanoprecipitates11. Through these pathways, a macroscopically premature fracture can be triggered, accompanied by internal quasi-cleavage and/or intergranular cracks (IGCs)12,13. The HE phenomenon was first reported for iron14 and evidenced for different metallic materials15,16, particularly for high-strength aluminum alloys17 due to the easy generation and ingress of hydrogen via aluminum-water reactions.
Despite the persistent debate on HE mechanisms, the search for mitigation strategies is crucially important to realize ultrahigh strengthening of aluminum alloys and prevent disastrous failures. A viable route for HE suppression is adding new chemical elements to the bulk material to develop deep trap sites that can attract a large number of hydrogen atoms via strong atomic bonds, so that the percentage of occupied hydrogen sites (hydrogen occupancy) at various defects and interfaces can be reduced based on the thermal equilibrium relationships among hydrogen trap sites18. With the advance of simulation techniques, the trapping capacity of candidate hydrogen traps can be theoretically explored, and their practical effects can be further examined in experiments. Following this route, successful examples have been reported for steels, including NbC precipitates with their strong hydrogen trapping effect predicted by first-principles simulations19 and experimentally verified by atom probe tomography (APT)20. Similarly, vanadium was found to be beneficial for resisting HE in steels21, which was rationalized by the hydrogen trapping at vanadium carbides seen in simulations22 and experiments23. Unfortunately, no ideal trap sites have been reported and confirmed for aluminum alloys until now.
One step forward in this direction is the newly reported beneficial effects of intermetallic compound particles in trapping hydrogen and reducing quasi-cleavage cracks in high-strength aluminum alloys24,25. However, most coarse particles are intrinsically brittle26, and their dynamic hydrogen trapping ability is weakened by their large size since full trapping of hydrogen atoms within μm-sized particles in a short period of time is difficult; instead, hydrogen can rapidly diffuse to extremely small-sized (down to several layers of atoms) stressed regions and readily facilitate crack growth27. This necessitates the search for effective nanosized structures for hydrogen trapping and HE mitigation in aluminum alloys. Typically, high-strength aluminum alloys are developed by age-hardening methods, which involve heat treatment at a specific temperature to induce a transformation from soluble elements in a supersaturated solid solution to insoluble phases. These insoluble nanoparticles, so-called precipitates, impede the movement of lattice defects and lead to high strength of aluminum alloys. It is thereby believed that the basis of HE mitigation methods must lie in the modification of hydrogen-precipitate interactions to solve the strength-HE conflict.
Here, we show that nanosized age-hardening precipitates, widely available and serving as core-strengthening elements in high-strength aluminum alloys, can be switched into strong hydrogen trap sites and contribute to elevated HE resistance. Bearing in mind the detrimental effects of η precipitates (and their numerous variants, here unified as η-MgZn2) in HE due to the risk of interfacial debonding11, we show the strong and beneficial hydrogen trapping effect of T (Al2Mg3Zn3) precipitates. We further investigate the effectiveness of these precipitates in the control of HE and related mechanisms by advanced characterization techniques, taking a typical Al-Mg-Zn-Cu aluminum alloy as a model material.
The quaternary 7XXX Al alloys with a chemical composition of Al-5.6Zn-2.5Mg-1.6Cu (wt%) were prepared such that the η phase was partially switched to the T phase through modification of the aging parameters (details shown in Methods and Supplementary Fig. 1) without changing its overall chemical composition and texture. We hypothesized a partial transformation from η to T when the aging temperature was elevated from low temperature (LT) to high temperature (HT), which was confirmed by transmission electron microscopy (TEM) and selected area electron diffraction patterns, indicating fully η phase in the LT material (Fig. 1a, b) and a considerable amount of T phase in the HT material (Fig. 1c, d and Supplementary Fig. 2). The typical high-resolution morphology of a T precipitate is shown in Fig. 1e, exhibiting the near-spherical shape in contrast with the elongated η phase. More high-angle annular dark field-scanning transmission electron microscopy (HAADF-STEM) images and fast Fourier transform analyses are included in Supplementary Fig. 3, for the confirmation of the T phase. The first-principles simulation shown in Fig. 1f indicates excellent hydrogen trapping capacity in the interior of T precipitates with a maximum binding energy of 0.6 eV. The APT results in Fig. 1g to i provide compositional evidence of the T phase (with a maximum Zn + Mg concentration close to 80 at.%) in HT material28. The increase in aging temperature did not significantly alter the precipitate size (with an average diameter lower than 7 nm in both), which enables evaluation of the HE sensitivities of these two materials with similar tensile strengths and precipitate coherency29,30, although some of the coarse η precipitates in HT material are semi-coherent (Supplementary Fig. 4).
Fig. 1: Morphologies of precipitates and hydrogen trapping at T phase in Al-Mg-Zn-Cu alloys.
a TEM image and b diffraction pattern of η phase in the LT material; c TEM image and d fast Fourier transform of T phase in the HT material. e magnified HAADF-STEM image of the T phase. All images were taken along the [110]Al zone axis. f distribution of hydrogen trapping energies in the interior of the T phase with a maximum energy of 0.6 eV predicted by first-principles calculations. g element maps for Mg, Zn, and Cu with 6, 6, and 2% iso-surfaces obtained by APT. h, i cross-sectional atom percentages of Zn, Mg, and Cu in T and η precipitates.
In situ tensile tests under synchrotron X-ray tomography (see Supplementary Fig. 5 for the experimental setup) indicated significantly improved ductility due to the change in nanoprecipitates (Fig. 2a). At the same hydrogen content level (Fig. 2b), the presence of the T phase resulted in a 38% increase in the fracture strain and a more than 60% reduction in the areal fractions of IGCs on the fracture surface. 4D observations directly proved the significantly reduced growth rate of the main crack in the presence of the T phase in terms of both areal fraction and crack length in 2D projections (Fig. 2c). In the last loading step, the areal fraction of the main crack decreased from 45% to less than 10%, and the crack length decreased from more than 500 to 200 μm. In contrast to the hydrogen-induced fast grain boundary (GB) separation in the LT material (Fig. 2d, g), the main crack in the T phase-rich material remained almost stagnant with increasing applied strain until the final fracture (Fig. 2i, l). Scanning electron microscopy (SEM) images of fracture surfaces obtained from repeat tensile tests (Fig. 2h, m and Supplementary Fig. 6) demonstrated the T phase-induced transition from large-area IGCs to small-sized separate cracks. According to the decohesion mechanism31, hydrogen reduces the energy required to separate various interfaces (cohesive energy), including GBs, and hydrogen coverage at GBs controls the growth behavior of IGCs, such as the crack propagation velocity32. The intense stress field in the vicinity of a GB ahead of the crack tip, after the initiation of an IGC, attracts more hydrogen atoms toward it33, and the crack growth speed can be greatly affected by the initial hydrogen occupancy at the GBs.
Fig. 2: Mechanical properties and crack growth behavior.
a Stress–strain curves for LT and HT materials, with the inset figure showing the average areal fractions of IGCs on the fracture surfaces. Error bars are determined from repeat tensile tests. b Thermal desorption curves of hydrogen-charged LT and HT materials. c Dependence of areal fraction and crack length of IGCs on the applied strain for LT and HT materials. d–g 3D renderings of the IGCs in the LT material at an applied strain of εa = 8.9, 11.4, 14.0, and 17.8%, respectively. h SEM morphologies of the fracture surface for the LT material. i–l 3D renderings of the IGCs in the HT material at an applied strain of εa = 7.2, 9.6, 16.8, and 22.5%, respectively. m SEM morphologies of the fracture surface for the HT material.
Moreover, other mechanisms, particularly hydrogen-enhanced local plasticity, can also facilitate IGC fracture due to dislocation interaction with GBs, which alters the local stress and strain states, GB structure, and hydrogen distributions through dislocation accommodation, pile up, and penetration through GBs34,35. In some cases, the involvement of dislocations can even become a decisive factor for IGCs and act as the core HE mechanism. Thus, we directly measured the 3D strain distributions ahead of the crack tip by tracking the movement of numerous S phase (Al2MgCu) particles within the tensile material using the synchrotron-based imaging and microstructural features tracking techniques (see Methods), enabling nondestructive examination of the influence of hydrogen on the local plasticity at reasonable temporal and spatial resolutions. Figure 3a, c show the gradual accumulation of plastic strains in the LT material, which occurs in the regions both near and away from the crack tip, whereas the relative strain map (Fig. 3d) indicates obviously enhanced local straining at the crack tip when the macroscopic strain increases from 8.9 to 14.0%. With crack growth, the stress-driven hydrogen diffusion toward the crack tip can lead to enhanced dislocation mobility due to the hydrogen shielding effect4, provided sufficient hydrogen atoms are attached to dislocations, forming a protective atmosphere that allows them to move more easily in certain directions. Despite the debate on HE mechanisms, in terms of either how hydrogen interacts with dislocations36 or under what critical conditions the dislocation-based mechanism can dominate the final IGC fracture37, a lower HE sensitivity is anticipated if the hydrogen concentration at dislocations can be reduced. This is observed in the T phase-rich material, shown in Fig. 3e, h, indicating a strain distribution throughout the whole shear bands, instead of being concentrated at the crack tip.
Fig. 3: Evolution of strain distributions around the crack tip in LT and HT materials.
a–c High-density equivalent strain (εeqv) maps in the virtual x-z cross sections of the LT material at an applied strain of εa = 8.9, 11.4, and 14.0%, respectively, obtained from 3D particle tracking. d Relative equivalent strain map between εa of 8.9% and 14.0%, indicating severe strain localization at the tip of the main crack. e–g Equivalent strain maps in the virtual x-z cross sections of the HT material at an applied strain of εa = 9.6, 12.0, and 14.3%, respectively. h Relative equivalent strain map between εa of 9.6 and 14.3%, in which suppressed strain localization at the crack tip is observed due to the change in precipitates.
It is anticipated that the strong hydrogen trapping at the T phase effectively reduces hydrogen concentrations at other trap sites, including dislocations, GBs, vacancies, and nanosized η precipitates, which were shown to be sensitive to HE. The reduced hydrogen coverage at dislocations and GBs near the crack tip, due to the presence of numerous nanosized T precipitates, is expected to weaken hydrogen-enhanced local plasticity and, accordingly, its contribution to hydrogen-induced debonding at GBs. As a result, the cohesive energy of the GB can be maintained at a sufficiently high level above the critical value required for separation. The whole process, which occurs in a limited nanoscale area27 near the GB, strongly depends on the local hydrogen partitioning among various hydrogen trap sites, including dislocations, vacancies, GBs, voids, and precipitates. This nanoscopic hydrogen partitioning behavior during crack growth was predicted, based on the experimentally measured 3D distributions of various trap sites, providing real insights into the role of the T phase in hydrogen trapping. In the plastic zone in front of the crack tip, various trap sites were visualized in 3D and incorporated into the calculation of 3D hydrogen distributions, which were established based on the thermal equilibrium among these trap sites. We focused on the regions with a size of 80 × 80 × 80 μm³ ahead of the main crack at an applied strain of 14.0–14.3% (Fig. 4a, c). Severe and complicated hydrogen-trap interactions are reasonably expected to occur within the plastic zone around the crack tip, particularly near the point with peak stress and hydrogen concentration. The stress localization can assist preferential particle damage and, accordingly, void initiation and growth in front of the crack, as can be observed in the 3D distributions of particles and micron-sized voids shown in Fig. 4a, c, which indicates predominant particle breakage in the plastic zone, whereas no obvious change in the void distribution caused by hydrogen was found. Simulations indicate hydrogen trapping in the interior of the S phase (see Methods), implying that the effect of hydrogen on IGCs by accelerating interfacial decohesion at particles is limited. Furthermore, the dislocation density distributions, including both the statistically stored dislocations, proportional to the equivalent plastic strain, and the geometrically necessary dislocations, proportional to the gradient of the equivalent plastic strain, were obtained from strain maps (Fig. 4b, d) and both of these kinds of dislocations have been suggested to affect HE by interacting with hydrogen at the crack tip38.
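The proportionalities quoted above can be turned into a simple post-processing step. The sketch below (a minimal illustration, not the authors' code) assumes an equivalent-plastic-strain field sampled on a regular grid, e.g. interpolated from the particle-tracking strain map, and uses an Ashby-type relation for the geometrically necessary dislocations; the proportionality constants, grid spacing, and input file name are hypothetical.

```python
import numpy as np

b = 0.286e-9      # Burgers vector of aluminum, m
h = 20e-6         # grid spacing of the strain map, m (assumed)
k_ssd = 1.0e15    # SSD density per unit equivalent plastic strain, 1/m^2 (hypothetical)
k_gnd = 2.0       # geometric factor for GNDs, order unity (hypothetical)

eps = np.load("eqv_plastic_strain.npy")       # hypothetical 3D array of equivalent plastic strain
grads = np.gradient(eps, h)                   # strain gradients along x, y, z
grad_mag = np.sqrt(sum(g ** 2 for g in grads))

rho_ssd = k_ssd * eps                         # statistically stored dislocations
rho_gnd = k_gnd * grad_mag / b                # geometrically necessary dislocations (Ashby-type)
rho_total = rho_ssd + rho_gnd                 # total dislocation density field, 1/m^2
```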
Fig. 4: Local hydrogen partitioning behavior around the crack tip in LT and HT materials.
a Virtual cross-section X-ray microtomography (XMT) image of the main crack at an applied strain of εa = 14.0% and 3D perspective view of voids and particles in the LT material. b Distribution of the total dislocation density at the same location as in a. c Virtual cross-section XMT image of the main crack at an applied strain of εa = 14.3% and 3D perspective view of voids and particles in the HT material. d Distribution of the total dislocation density at the same location as in c. e Comparison of hydrogen concentrations (CH) and occupancies at various trap sites, i.e., dislocations, GBs, vacancies, S phase, coherent η/Al interfaces, semi-coherent η/Al interfaces, T phase, and voids, in undeformed regions and ahead of the crack tip for the HT material. The bar height represents the mean value, whereas the error bars, which originate from the heterogeneous distribution of trap sites, mark the upper and lower bounds. f T phase-induced reduction in the hydrogen concentrations at dislocations, GBs, vacancies, and coherent η/Al interfaces in the plastic zone ahead of the crack tip, with the blue bars showing the regions far from voids and the red bars showing those adjacent to voids.
The distributions of the hydrogen concentration and occupancy at dislocations, GBs, vacancies, precipitates, particles, and microvoids in the material interior were quantitatively assessed from local partitioning calculations. Figure 4e shows that in the presence of the T phase, the majority of hydrogen atoms go to the T phase and voids in both undeformed regions and the plastic zone at the crack tip, which is believed to be the main cause of crack growth suppression. Notably, all the trap sites were included in the hydrogen partitioning calculation, with the error bars originating from the heterogeneous distribution of trap sites. The only factor that is difficult to inherently incorporate into the calculation is the stress effect on hydrogen diffusion due to the complex heterogeneous distribution of strong trap sites within the stress field. Nonetheless, we reasonably expect that the T-phase nanoprecipitates remain effective in hydrogen trapping even in the nanoscale highly stressed regions. Unlike the plate-shaped η phase and coarse intermetallic particles (such as the previously reported Fe-rich30 and Mn-rich25 particles, also shown to be effective in hydrogen trapping), the small size and spherical shape of T precipitates endow them with low-stress localization at sharp edges and excellent dynamic hydrogen trapping capacity in a transient hydrogen diffusion scenario. In the absence of neighboring voids, the hydrogen trapping effect in the T phase leads to a 2–3 orders of magnitude reduction in the hydrogen concentrations at dislocations, GBs, and vacancies (Fig. 4f), which improves the HE resistance through any of these likely pathways. In terms of dislocations, although hydrogen-enhanced dislocation nucleation and movement have been repeatedly captured by nanoscale in situ observations, typically in alloys lacking deep hydrogen traps39,40, it is believed that the hydrogen-enhanced local plasticity can be largely suppressed in the present case due to the richness of strong nanoscopic hydrogen traps, which instantly drive hydrogen diffusion away from dislocations despite the fast stress variation at the crack tip. In terms of vacancies, a large quantity of vacancies is generally considered to always be generated with the creation of dislocations, and vacancies occupied with a sufficiently high amount of hydrogen can favorably aggregate to form platelets on a specific crystallographic plane, acting as embryos for microvoids and cracks5. In terms of η phase precipitates, spontaneous debonding at the coherent η/Al interfaces (with a hydrogen binding energy of 0.3 eV) may also occur due to the stress-induced high hydrogen occupancy at the edge surfaces11, providing another likely pathway for HE, whereas no such phenomenon has been found at semi-coherent η/Al interfaces despite their high hydrogen trapping energy of 0.56 eV30. All the processes occurring at dislocations, GBs, and coherent precipitate interfaces are expected to be suppressed by the strong hydrogen trapping effect of the T phase.
Our experiment-simulation combined study presents a realistic prediction of spatially and time-resolved hydrogen distributions during crack growth, providing a critical basis for clarifying the roles of different trap sites, particularly precipitates, in hydrogen-induced intergranular fracture. It is believed that the competition effect exists between the two types of strengthening nanoprecipitates: hydrogen trapping at coherent η/Al interfaces induces interfacial debonding (negative effect), while the hydrogen trapping within the T phase strongly suppresses it with a much higher binding energy (positive effect). The beneficial role of the T phase in HE suppression originates from its high hydrogen trapping capacity (high hydrogen binding energy in its interior instead of interfaces, thereby high trap site density) and low-stress localization (spherical shape and small size). A noteworthy step forward from the previous works utilizing similar imaging and simulation methods is the new HE mitigation strategy proposed and verified here, through the modification of nanoscopic precipitates and transforming them into strong hydrogen traps. This strategy is expected to be effective in various high-strength aluminum alloys due to the wide availability of the T phase and can also inspire the development of HE-resistant alloys with similar switchable nanostructures. The present study also serves as an important supplement to the roles of the T phase in enhancing the mechanical properties of aluminum alloys, such as their great potential in increasing formability41,42 and irradiation resistance43 reported previously.
The aluminum alloy used in the present study has chemical compositions in mass % of 5.6Zn, 2.5Mg, 1.6Cu, and the balance Al. The specimen preparation procedures started with casting, homogenization (460 °C for 6 h and further increased to 465 °C for 24 h), hot rolling (400 °C, 87.5% thickness reduction), and thermal cycling (TC, 500 °C for 30 min, cooled in air, repeated eight times). Then the flat specimens with cross-sectional areas of approximately 0.6 mm × 0.6 mm were cut from the sheet plate by electrical discharge machining in water. After cutting, the specimens were electropolished in acid (5% HClO4 + 95% methanol, at a voltage of 12 V) for 30 s and subsequently put into a salt bath for solution treatment (ST, 470 °C for 1 h, quenched in ice water). The specimens were then divided into two groups, which were aged in oil at high temperature (150 °C) and low temperature (120 °C), respectively. After that, all the specimens were aged in humid air at 120 °C for 1 h in the same container, for the purpose of hydrogen charging. After hydrogen charging, the specimens were removed from the container, cleaned, dried, and kept in acetone until tensile tests or desorption tests. The specimen preparation procedures are illustrated in Supplementary Fig. 1.
Synchrotron X-ray microtomography (XMT) imaging
The projection-type XMT images were acquired at the BL20XU undulator beamline in SPring-8. A liquid nitrogen-cooled Si (111) double-crystal monochromator was utilized to produce a monochromatic X-ray beam with a beam energy of 20 keV. The image detector consisted of a digital CMOS camera (ORCA Flash 4.0: 2048 × 2048 pixel, Hamamatsu Photonics K.K.), a single-crystal scintillator (Ce: Lu2SiO5), and a lens (10×). A total of 1800 radiographs, scanning 180° with a 0.1-degree increment, were captured in each scan. The effective pixel size of the detector was 0.5 μm, and the sample-to-detector distance was 20 mm.
In situ tensile tests
Tensile tests were performed with a displacement rate of 0.02 mm/s using a miniature test rig (Deben UK Ltd). At each step, the displacement was held constant for 30 min before the next XMT scan. During such holding time, hydrogen redistribution occurs without significant hydrogen loss44. The displacement increment at each step was approximately 0.02 mm (corresponding to an applied strain of 2–3%) and the XMT images were obtained at all steps.
Microstructural characterization
Microstructures were observed using SEM (JSM-IT800) equipped with an energy-dispersive X-ray spectrometry (EDS) detector and a spherical aberration-corrected scanning transmission electron microscope (STEM, JEOL ARM-200F) combined with a HAADF detector. For grain size analysis, the flat specimens were etched with a 2.5% HNO3 + 1.5% HCl + 1.0% HF + 95% H2O solution for 15 s, washed with acetone, and observed under an optical microscope (DSX 500). APT analyses were performed on a CAMECA Instrument Inc. Local Electrode Atom Probe (LEAP) 5000 XR (reflectron fitted) in laser-pulsing mode (laser pulse energy of 40 pJ at a pulsing rate of 250 kHz), with the specimen at a base temperature of 50 K, and with five ions detected per 1000 pulses on average. APT reconstruction and analysis were performed using the CAMECA software AP Suite 6.1 and reconstructions were calibrated using the crystallographic features.
Thermal desorption tests
Thermal desorption analysis (TDA) was performed to measure the hydrogen concentration in the specimens after hydrogen charging. The tensile specimens were used for TDA analysis, during which the temperature of the specimen was raised from room temperature to a maximum temperature of 550 °C, at a heating rate of 90 °C/h.
Image analysis
The filtered back-projection algorithm was used to reconstruct the image slices from 1800 radiographs, from which the 16-bit images with a size of 2048 × 2048 px² and a total number of 2048 were obtained. The 16-bit images were transformed into eight-bit images using an absorption coefficient range of −30 to 40 cm⁻¹, which fits the eight-bit grayscale from 0 to 255. The coordinates, size, and shapes of Al2MgCu particles and voids were quantitatively analyzed using home-developed MATLAB-based programs, which segmented the images using the grayscale ranges of 200–255 and 0–100, respectively. Only features over 26 voxels in volume were analyzed to minimize the effect of noise. Following the quantitative analysis, the particle tracking was performed based on the microstructural feature tracking technique with the details described elsewhere45,46. In short, the same particles at different loading steps were precisely matched by comparing their gravity centers, volumes, and surface areas. This enables the generation of high-density 3D strain maps, by dividing the specimen interior into numerous tetrahedrons with the tracked Al2MgCu particles (with a total number of 55,890) as vertices.
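For readers who want to reproduce the thresholding and feature-size filtering described above, the following Python sketch shows the equivalent logic with NumPy and SciPy (the study itself used home-developed MATLAB programs; the input file name here is hypothetical).

```python
import numpy as np
from scipy import ndimage

volume = np.load("xmt_volume_8bit.npy")        # hypothetical 8-bit reconstructed volume

particles = (volume >= 200) & (volume <= 255)  # Al2MgCu particles: grayscale 200-255
voids = volume <= 100                          # voids and cracks: grayscale 0-100

def labeled_features(mask, min_voxels=26):
    """Label connected features and keep only those above the 26-voxel noise threshold."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1          # label IDs of retained features
    centroids = ndimage.center_of_mass(mask, labels, keep)  # gravity centers for tracking
    return list(zip(keep, sizes[sizes >= min_voxels], centroids))

particle_stats = labeled_features(particles)   # (label, volume in voxels, centroid)
void_stats = labeled_features(voids)
```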
Hydrogen partitioning analysis
Hydrogen atoms between lattice sites and trap sites are in a thermal equilibrium state18:
$$\frac{\theta_{\rm t}}{1-\theta_{\rm t}}=\theta_{\rm L}\exp\left(\frac{E_{\rm b}}{RT}\right)$$
where Eb is the trap binding energy, R is the universal gas constant, T is the temperature, θt is the occupancy at trap sites, and θL is the occupancy at lattice sites.
The hydrogen concentration and occupancy at all trap sites can be determined, given the total hydrogen concentration, trap densities, and binding energies of each trap site:
$$C_{\rm H}^{\rm T}=\theta_{\rm L}N_{\rm L}+\sum_i \theta_{{\rm t}i}N_{{\rm t}i}+C_{\rm pore}$$
where \(C_{\rm H}^{\rm T}\) is the total hydrogen content, NL is the trap site density at normal lattice sites, θti is the occupancy for the ith trap site, Nti is the trap site density for the ith trap site, and Cpore is the hydrogen concentration at pores.
The binding energies for different trap sites can be determined by first-principles simulation: grain boundary, 0.2 eV47; screw dislocation, 0.11 eV48; edge dislocation, 0.18 eV48; vacancy, 0.3 eV49; pore, 0.7 eV50; the interior of Al2MgCu particles, 0.22 eV51; coherent MgZn2 interfaces, 0.08–0.35 eV52; semi-coherent MgZn2 interfaces, 0.56 eV; the interior of T phase, 0.56 eV (second-highest binding energy).
The trap densities of different trap sites (Nti) are determined as follows: Nt_GB = 7.3 × 10²³ sites/m³ for GB, based on the grain size of 90 μm in the present equiaxed materials; the number density, size and shape of η and T precipitates were measured from high-resolution TEM images (Fig. 1 and Supplementary Fig. 2), giving an average diameter of 3.9 nm and thickness of 1.8 nm in LT specimen, an average diameter of 6.2 nm for both η and T and thickness of 2.1 nm for η in HT specimen, based on which the trap densities were determined as Nt_η = 7.7 × 10²⁵ sites/m³ in LT specimen, Nt_η_coherent = 1.25 × 10²⁶ sites/m³, Nt_η_semi = 1.63 × 10²⁵ sites/m³ and Nt_T = 1.32 × 10²⁶ sites/m³ in HT specimen; the values of Nti for dislocations and vacancies were determined based on the equivalent strains in the 3D strain maps; for Al2MgCu particles and pores, the Nti was calculated based on the 3D statistics (including coordinates, diameter, volume of each particle or pore) obtained from the geometric quantitative analysis of XMT images.
The analyzed regions were divided into numerous cubic cells with a size of 20 μm and it is assumed that the thermal equilibrium state is reached within these cells. On this basis, hydrogen partitioning calculations were performed for different regions of interest, including the undeformed regions (far from the crack tip) and the regions near the crack tip. At such a length scale, the nanosized precipitates can be assumed to be uniformly distributed, whereas the heterogeneous distributions of other trap sites such as dislocations, micro-pores, and S phase particles lead to the variance in the calculated results in different unit cells.
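To make the partitioning procedure concrete, the sketch below solves the two equations above for a single cell: given a total hydrogen content, it finds the lattice occupancy that balances the hydrogen budget and then reports the trapped hydrogen at each site. This is an illustrative Python implementation only; the lattice site density, total hydrogen content, and temperature are assumed values, binding energies are converted from eV to J/mol, and only a few trap types are listed.

```python
import numpy as np
from scipy.optimize import brentq

R, T = 8.314, 298.0            # gas constant (J mol^-1 K^-1) and temperature (K), assumed
EV_TO_JMOL = 96485.0           # 1 eV per atom expressed in J/mol

# (binding energy in eV, trap site density in sites/m^3); values taken from the text above
traps = {
    "grain boundary": (0.20, 7.3e23),
    "coherent eta/Al": (0.35, 1.25e26),
    "T phase": (0.56, 1.32e26),
}
N_L = 1.0e29                    # lattice (interstitial) site density, sites/m^3 (assumed)
C_total = 1.0e24                # total hydrogen content, atoms/m^3 (assumed)

def trap_occupancy(theta_L, Eb_eV):
    """Oriani equilibrium: theta_t / (1 - theta_t) = theta_L * exp(Eb / RT)."""
    K = np.exp(Eb_eV * EV_TO_JMOL / (R * T))
    return theta_L * K / (1.0 + theta_L * K)

def hydrogen_balance(theta_L):
    C = theta_L * N_L + sum(trap_occupancy(theta_L, Eb) * N for Eb, N in traps.values())
    return C - C_total

theta_L = brentq(hydrogen_balance, 1e-30, 1.0)    # lattice occupancy satisfying the balance
for name, (Eb, N) in traps.items():
    print(name, trap_occupancy(theta_L, Eb) * N)  # trapped hydrogen atoms per m^3
```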
First-principles calculation
First-principles calculations were conducted within the density functional theory framework using the Vienna ab initio simulation package53 with the Perdew–Burke–Ernzerhof generalized gradient approximation exchange-correlation density functional. The Brillouin-zone k-point samplings were chosen using the Monkhorst–Pack algorithm, where a 3 × 3 × 3 k-point mesh was used in the calculation model. A plane-wave energy cut-off of 360 eV was applied using a first-order Methfessel–Paxton scheme with a smearing parameter of 0.2 eV. The total energy was converged within 10⁻⁶ eV/atom for all calculations. The relaxed configurations were obtained using the conjugate gradient method that terminated the search when the force on all atoms was reduced to 0.01 eV/Å. The zero-point energy (ZPE) of hydrogen atoms was not considered for the total energy because no significant effect of ZPE correction was observed in the preliminary calculations, as shown in Supplementary Table 1. Atomic configurations were visualized using VESTA 3.4.4.
The crystallographic structure of the T phase with lattice parameters of a = b = c = 1.416 nm (bcc structure, \(Im\bar{3}m\) space group) was used in the simulation. Two kinds of deep hydrogen trap sites were found within T precipitates: the first one has a maximum energy of 0.6 eV with a relatively low trap site density of 12 sites/unit cell; the other one exhibits a second-highest trap energy of 0.56 eV, with considerably higher trap density of 24 sites/unit cell than the previous one.
The experimental and computational data that support the findings of this study are available from the corresponding authors on request. The thermal desorption, tensile test, and SEM data used in this study are available in the database under accession code https://1drv.ms/u/s!AujHvlECIhSVxQphZnkljvBmGeix?e=AVksxq.
Liu, L. et al. Making ultrastrong steel tough by grain-boundary delamination. Science 368, 1347–1352 (2020).
Scully, J. R., Young, G. A. & Smith, S. W. in Gaseous Hydrogen Embrittlement of Materials in Energy Technologies (eds Gangloff, R. P. & Somerday, B. P.) (Woodhead Publishing, 2012).
Song, J. & Curtin, W. Atomic mechanism and prediction of hydrogen embrittlement in iron. Nat. Mater. 12, 145–151 (2013).
Birnbaum, H. K. & Sofronis, P. Hydrogen-enhanced localized plasticity—a mechanism for hydrogen-related fracture. Mater. Sci. Eng. A 176, 191–202 (1994).
Lu, G. & Kaxiras, E. Hydrogen embrittlement of aluminum: the crucial role of vacancies. Phys. Rev. Lett. 94, 155501 (2005).
Hou, J., Kong, X.-S., Wu, X., Song, J. & Liu, C. Predictive model of hydrogen trapping and bubbling in nanovoids in bcc metals. Nat. Mater. 18, 833–839 (2019).
Van der Ven, A. & Ceder, G. The thermodynamics of decohesion. Acta Mater. 52, 1223–1235 (2004).
Hanson, J. P. et al. Crystallographic character of grain boundaries resistant to hydrogen-assisted fracture in Ni-base alloy 725. Nat. Commun. 9, 1–11 (2018).
Liang, Y. & Sofronis, P. Toward a phenomenological description of hydrogen-induced decohesion at particle/matrix interfaces. J. Mech. Phys. Solids 51, 1509–1531 (2003).
Zhang, Z., Moore, K. L., McMahon, G., Morana, R. & Preuss, M. On the role of precipitates in hydrogen trapping and hydrogen embrittlement of a nickel-based superalloy. Corros. Sci. 146, 58–69 (2019).
Tsuru, T. et al. Hydrogen-accelerated spontaneous microcracking in high-strength aluminium alloys. Sci. Rep. 10, 1–8 (2020).
Shimizu, K., Toda, H., Uesugi, K. & Takeuchi, A. Local deformation and fracture behavior of high-strength aluminum alloys under hydrogen influence. Metall. Mater. Trans. A 51, 1–19 (2020).
We thank the Japan Synchrotron Radiation Research Institute for supporting the synchrotron radiation experiments at SPring-8 through proposal number 2020A1796/1084. We thank Ms. Chiharu Koga for the assistance in image acquisition at SPring-8, and Mr. Yuki Fukuda for performing the thermal desorption analysis. We thank Ms. Huihui Zhu at the University of Science and Technology Beijing for her contribution to the APT experiments. H. Toda acknowledges the financial support from the Japan Science and Technology Agency through Core Research for Evolutional Science and Technology (CREST) project (grant JPMJCR-1995) and the Japan Society for the Promotion of Science (JSPS) through the KAKENHI project (grant JP21H04624).
These authors contributed equally: Yafei Wang, Bhupendra Sharma.
Department of Mechanical Engineering, Kyushu University, Fukuoka, 819-0395, Japan
Yafei Wang, Bhupendra Sharma, Yuantao Xu, Hiro Fujihara & Hiroyuki Toda
School of Chemical Engineering and Technology, Xi'an Jiaotong University, Xi'an, 710049, China
Yafei Wang & Guangxu Cheng
Shanghai Key Laboratory of Materials Laser Processing and Modification, Shanghai Jiao Tong University, Shanghai, 200240, China
Yuantao Xu
Department of Physical Science and Materials Engineering, Iwate University, Iwate, 020-8551, Japan
Kazuyuki Shimizu
Department of Materials Science and Engineering, Kyoto University, Kyoto, 606-8501, Japan
Kyosuke Hirayama
Japan Synchrotron Radiation Research Institute, Hyogo, 679-5198, Japan
Akihisa Takeuchi & Masayuki Uesugi
H.T. conceived the idea and designed the experiments. Y.W., B.S., and H.F. prepared the specimens and performed synchrotron-based in situ tensile tests. Y.X. performed the TEM observations. Y.W., B.S., and H.F. analyzed the CT images and obtained 3D strain distributions. Y.W. and K.S. performed hydrogen partitioning calculations. K.H., A.T., and M.U. assisted in the experiments at the synchrotron radiation facility in Spring-8. Y.W. prepared the original manuscript with input from all authors. H.T. revised the manuscript. All authors have given approval for the final version of the manuscript.
Correspondence to Yafei Wang, Bhupendra Sharma or Yuantao Xu.
Nature Communications thanks Matheus Araujo Tunes and the other, anonymous, reviewers for their contribution to the peer review of this work. Peer reviewer reports are available.
Wang, Y., Sharma, B., Xu, Y. et al. Switching nanoprecipitates to resist hydrogen embrittlement in high-strength aluminum alloys. Nat Commun 13, 6860 (2022). https://doi.org/10.1038/s41467-022-34628-4
|
CommonCrawl
|
Projection (measure theory)
In measure theory, projection maps often appear when working with product (Cartesian) spaces: the product sigma-algebra of measurable spaces is defined to be the coarsest one such that the projection mappings are measurable. Sometimes, however, product spaces are equipped with a 𝜎-algebra different from the product 𝜎-algebra, and in these cases the projections need not be measurable at all.
The projection of a measurable set is called an analytic set and need not be measurable. However, in some cases, either relative to the product 𝜎-algebra or relative to some other 𝜎-algebra, the projection of a measurable set is indeed measurable.
Henri Lebesgue himself, one of the founders of measure theory, was mistaken about this point. In a paper from 1905 he wrote that the projection of a Borel set in the plane onto the real line is again a Borel set.[1] The mathematician Mikhail Yakovlevich Suslin found that error about ten years later, and his subsequent research led to descriptive set theory.[2] The fundamental mistake of Lebesgue was to think that projection commutes with decreasing intersection, while there are simple counterexamples to that.[3]
Basic examples
For an example of a non-measurable set with measurable projections, consider the space $X:=\{0,1\}$ with the 𝜎-algebra ${\mathcal {F}}:=\{\varnothing ,\{0\},\{1\},\{0,1\}\}$ and the space $Y:=\{0,1\}$ with the 𝜎-algebra ${\mathcal {G}}:=\{\varnothing ,\{0,1\}\}.$ The diagonal set $\{(0,0),(1,1)\}\subseteq X\times Y$ is not measurable relatively to ${\mathcal {F}}\otimes {\mathcal {G}},$ although the both projections are measurable sets.
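To spell out this first example (a short verification added here for completeness): the product 𝜎-algebra is generated by the measurable rectangles, and the only non-empty rectangles $A\times B$ with $A\in {\mathcal {F}}$ and $B\in {\mathcal {G}}$ are $\{0\}\times Y,$ $\{1\}\times Y$ and $X\times Y,$ so that ${\mathcal {F}}\otimes {\mathcal {G}}=\{\varnothing ,\{0\}\times Y,\{1\}\times Y,X\times Y\}.$ The diagonal set $\{(0,0),(1,1)\}$ is not one of these four sets, yet both of its projections equal $\{0,1\},$ which is measurable in each factor.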
The common example of a non-measurable set which is a projection of a measurable set is in the Lebesgue 𝜎-algebra. Let ${\mathcal {L}}$ be the Lebesgue 𝜎-algebra of $\mathbb {R} $ and let ${\mathcal {L}}'$ be the Lebesgue 𝜎-algebra of $\mathbb {R} ^{2}.$ For any bounded $N\subseteq \mathbb {R} $ not in ${\mathcal {L}},$ the set $N\times \{0\}$ is in ${\mathcal {L}}',$ since Lebesgue measure is complete and the product set is contained in a set of measure zero.
Still, one can see that ${\mathcal {L}}'$ is not the product 𝜎-algebra ${\mathcal {L}}\otimes {\mathcal {L}}$ but its completion. For such an example in the product 𝜎-algebra itself, one can take the space $\{0,1\}^{\mathbb {R} }$ (or any product along a set with cardinality greater than the continuum) with the product 𝜎-algebra ${\mathcal {F}}=\textstyle {\bigotimes \limits _{t\in \mathbb {R} }}{\mathcal {F}}_{t}$ where ${\mathcal {F}}_{t}=\{\varnothing ,\{0\},\{1\},\{0,1\}\}$ for every $t\in \mathbb {R} .$ In fact, in this case "most" of the projected sets are not measurable, since the cardinality of ${\mathcal {F}}$ is $\aleph _{0}\cdot 2^{\aleph _{0}}=2^{\aleph _{0}},$ whereas the cardinality of the projected sets is $2^{2^{\aleph _{0}}}.$ There are also examples of Borel sets in the plane whose projection to the real line is not a Borel set, as Suslin showed.[2]
Measurable projection theorem
The following theorem gives a sufficient condition for the projection of measurable sets to be measurable.
Let $(X,{\mathcal {F}})$ be a measurable space and let $(Y,{\mathcal {B}})$ be a Polish space where ${\mathcal {B}}$ is its Borel 𝜎-algebra. Then for every set in the product 𝜎-algebra ${\mathcal {F}}\otimes {\mathcal {B}},$ the projected set onto $X$ is a universally measurable set relative to ${\mathcal {F}}.$[4]
An important special case of this theorem is that the projection of any Borel set of $\mathbb {R} ^{n}$ onto $\mathbb {R} ^{n-k}$ where $k<n$ is Lebesgue-measurable, even though it is not necessarily a Borel set. In addition, it means that the earlier example of a non-Lebesgue-measurable subset of $\mathbb {R} $ that is the projection of some measurable set of $\mathbb {R} ^{2}$ is essentially the only sort of such example.
See also
• Analytic set – subset of a Polish space that is the continuous image of a Polish space
• Descriptive set theory – Subfield of mathematical logic
References
1. Lebesgue, H. (1905) Sur les fonctions représentables analytiquement. Journal de Mathématiques Pures et Appliquées. Vol. 1, 139–216.
2. Moschovakis, Yiannis N. (1980). Descriptive Set Theory. North Holland. p. 2. ISBN 0-444-70199-0.
3. Lowther, George (8 November 2016). "Measurable Projection and the Debut Theorem". Almost Sure. Retrieved 21 March 2018.
4. Crauel, Hans (2003). Random Probability Measures on Polish Spaces. Stochastics Monographs. London: CRC Press. p. 13. ISBN 0415273870.
External links
• "Measurable projection theorem", PlanetMath
|
Wikipedia
|
Adomian decomposition method
The Adomian decomposition method (ADM) is a semi-analytical method for solving ordinary and partial nonlinear differential equations. The method was developed from the 1970s to the 1990s by George Adomian, chair of the Center for Applied Mathematics at the University of Georgia.[1] It is further extensible to stochastic systems by using the Ito integral.[2] The aim of this method is a unified theory for the solution of partial differential equations (PDE), an aim which has been superseded by the more general theory of the homotopy analysis method.[3] The crucial aspect of the method is the employment of the "Adomian polynomials", which allow for solution convergence of the nonlinear portion of the equation without simply linearizing the system. These polynomials mathematically generalize to a Maclaurin series about an arbitrary external parameter, which gives the solution method more flexibility than direct Taylor series expansion.[4]
Ordinary differential equations
The Adomian method is well suited to solving Cauchy problems, an important class of problems which includes initial value problems.
Application to a first order nonlinear system
An example of an initial value problem for an ordinary differential equation is the following:
$y^{\prime }(t)+y^{2}(t)=-1,$
$y(0)=0.$
To solve the problem, the highest degree differential operator (written here as L) is put on the left side, in the following way:
$Ly=-1-y^{2},$
with L = d/dt and $L^{-1}=\int _{0}^{t}()$. Now the solution is assumed to be an infinite series of contributions:
$y=y_{0}+y_{1}+y_{2}+y_{3}+\cdots .$
Replacing in the previous expression, we obtain:
$(y_{0}+y_{1}+y_{2}+y_{3}+\cdots )=y(0)+L^{-1}[-1-(y_{0}+y_{1}+y_{2}+y_{3}+\cdots )^{2}].$
Now we identify y0 with some explicit expression on the right, and yi, i = 1, 2, 3, ..., with some expression on the right containing terms of lower order than i. For instance:
${\begin{aligned}&y_{0}&=&\ y(0)+L^{-1}(-1)&=&-t\\&y_{1}&=&-L^{-1}(y_{0}^{2})=-L^{-1}(t^{2})&=&-t^{3}/3\\&y_{2}&=&-L^{-1}(2y_{0}y_{1})&=&-2t^{5}/15\\&y_{3}&=&-L^{-1}(y_{1}^{2}+2y_{0}y_{2})&=&-17t^{7}/315.\end{aligned}}$
In this way, any contribution can be explicitly calculated at any order. If we settle for the four first terms, the approximant is the following:
${\begin{aligned}y&=y_{0}+y_{1}+y_{2}+y_{3}+\cdots \\&=-\left[t+{\frac {1}{3}}t^{3}+{\frac {2}{15}}t^{5}+{\frac {17}{315}}t^{7}+\cdots \right]\end{aligned}}$
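The recursion above is easy to automate. The following short sketch is added here as an illustration (it is not part of the original article and assumes the Python library SymPy); it generates the Adomian polynomials of $f(y)=y^{2}$ from a formal parameter and applies $L^{-1}=\int _{0}^{t}()$, reproducing the four contributions above:

import sympy as sp

t, lam = sp.symbols('t lambda')
N = 4  # number of contributions y0 .. y3

def adomian_poly(f, comps, n):
    # A_n = (1/n!) * d^n/d lam^n of f(sum_i comps[i]*lam**i), evaluated at lam = 0
    u = sum(c * lam**i for i, c in enumerate(comps))
    return sp.simplify(sp.diff(f(u), lam, n).subs(lam, 0) / sp.factorial(n))

comps = [-t]  # y0 = y(0) + L^{-1}(-1) = -t
for n in range(1, N):
    A = adomian_poly(lambda u: u**2, comps, n - 1)
    comps.append(sp.expand(-sp.integrate(A, (t, 0, t))))  # y_n = -L^{-1}(A_{n-1})

print(sp.expand(sum(comps)))  # -17*t**7/315 - 2*t**5/15 - t**3/3 - t

The printed series agrees with the approximant above; it is the Maclaurin expansion of the exact solution y(t) = -tan(t).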
Application to Blasius equation
A second example, with more complex boundary conditions is the Blasius equation for a flow in a boundary layer:
${\frac {\mathrm {d} ^{3}u}{\mathrm {d} x^{3}}}+{\frac {1}{2}}u{\frac {\mathrm {d} ^{2}u}{\mathrm {d} x^{2}}}=0$
With the following conditions at the boundaries:
${\begin{aligned}u(0)&=0\\u^{\prime }(0)&=0\\u^{\prime }(x)&\to 1,\qquad x\to \infty \end{aligned}}$
Linear and non-linear operators are now called $L={\frac {\mathrm {d} ^{3}}{\mathrm {d} x^{3}}}$ and $N={\frac {1}{2}}u{\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}$, respectively. Then, the expression becomes:
$Lu+Nu=0$
and the solution may be expressed, in this case, in the following simple way:
$u=\alpha +\beta x+\gamma x^{2}/2-L^{-1}Nu$
where $L^{-1}\xi (x)=\int \mathrm {d} x\int \mathrm {d} x\int \mathrm {d} x\;\;\xi (x).$ If:
${\begin{aligned}u&=u^{0}+u^{1}+u^{2}+\cdots +u^{N}\\&=\alpha +\beta x+\gamma x^{2}/2-{\frac {1}{2}}L^{-1}(u^{0}+u^{1}+u^{2}+\cdots +u^{N}){\frac {\mathrm {d} ^{2}}{\mathrm {d} x^{2}}}(u^{0}+u^{1}+u^{2}+\cdots +u^{N})\end{aligned}}$
and:
${\begin{aligned}u^{0}&={}\alpha +\beta x+\gamma x^{2}/2\\u^{1}&=-{\frac {1}{2}}L^{-1}(u^{0}u^{0''})&=&-L^{-1}A_{0}\\u^{2}&=-{\frac {1}{2}}L^{-1}(u^{1}u^{0''}+u^{0}u^{1''})&=&-L^{-1}A_{1}\\u^{3}&=-{\frac {1}{2}}L^{-1}(u^{2}u^{0''}+u^{1}u^{1''}+u^{0}u^{2''})&=&-L^{-1}A_{2}\\&\cdots \end{aligned}}$
The Adomian polynomials, which linearize the non-linear term, can be obtained systematically by using the following rule:
$A_{n}={\frac {1}{n!}}{\frac {\mathrm {d} ^{n}}{\mathrm {d} \lambda ^{n}}}f(u(\lambda ))\mid _{\lambda =0},$
where: ${\frac {\mathrm {d} ^{n}}{\mathrm {d} \lambda ^{n}}}u(\lambda )\mid _{\lambda =0}=n!u_{n}$
Boundary conditions must be applied, in general, at the end of each approximation. In this case, the integration constants must be grouped into three final independent constants. However, in our example, the three constants appear grouped from the beginning in the form shown in the formal solution above. After applying the first two boundary conditions we obtain the so-called Blasius series:
$u={\frac {\gamma }{2}}x^{2}-{\frac {\gamma ^{2}}{2}}\left({\frac {x^{5}}{5!}}\right)+{\frac {11\gamma ^{3}}{4}}\left({\frac {x^{8}}{8!}}\right)-{\frac {375\gamma ^{4}}{8}}\left({\frac {x^{11}}{11!}}\right)+\cdots $
To obtain γ we have to apply boundary conditions at ∞, which may be done by writing the series as a Padé approximant:
$f(z)=\sum _{n=0}^{L+M}c_{n}z^{n}={\frac {a_{0}+a_{1}z+\cdots +a_{L}z^{L}}{b_{0}+b_{1}z+\cdots +b_{M}z^{M}}}$
where L = M. The limit at $\infty $ of this expression is $a_{L}/b_{M}$.
If we choose b0 = 1, M linear equations for the b coefficients are obtained:
$\left[{\begin{array}{cccc}c_{L-M+1}&c_{L-M+2}&\cdots &c_{L}\\c_{L-M+2}&c_{L-M+3}&\cdots &c_{L+1}\\\vdots &\vdots &&\vdots \\c_{L}&c_{L+1}&\cdots &c_{L+M-1}\end{array}}\right]\left[{\begin{array}{c}b_{M}\\b_{M-1}\\\vdots \\b_{1}\end{array}}\right]=-\left[{\begin{array}{c}c_{L+1}\\c_{L+2}\\\vdots \\c_{L+M}\end{array}}\right]$
Then, we obtain the a coefficients by means of the following sequence:
${\begin{aligned}a_{0}&=c_{0}\\a_{1}&=c_{1}+b_{1}c_{0}\\a_{2}&=c_{2}+b_{1}c_{1}+b_{2}c_{0}\\&\cdots \\a_{L}&=c_{L}+\sum _{i=1}^{\min(L,M)}b_{i}c_{L-i}.\end{aligned}}$
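As a concrete illustration of this linear system and the back-substitution (a sketch added here, not part of the original article; it assumes NumPy, and the function name pade_coefficients is ours), the following code builds the [L/M] Padé coefficients from given Maclaurin coefficients and is checked on the [1/1] approximant of exp(z):

import numpy as np

def pade_coefficients(c, L, M):
    # Assumes L >= M - 1 so that all the c indices below are nonnegative.
    c = np.asarray(c, dtype=float)
    # M x M system: row i is (c_{L-M+1+i}, ..., c_{L+i}) acting on (b_M, ..., b_1)
    A = np.array([[c[L - M + 1 + i + j] for j in range(M)] for i in range(M)])
    rhs = -c[L + 1:L + M + 1]
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)[::-1]))  # (b_0, ..., b_M)
    a = np.array([sum(b[i] * c[k - i] for i in range(min(k, M) + 1))
                  for k in range(L + 1)])                       # (a_0, ..., a_L)
    return a, b

# [1/1] Pade approximant of exp(z) from c = (1, 1, 1/2): (1 + z/2) / (1 - z/2)
print(pade_coefficients([1.0, 1.0, 0.5], L=1, M=1))  # (array([1. , 0.5]), array([ 1. , -0.5]))

In the Blasius example the same construction is applied to the series for u′ with L = M, so that the limit at infinity equals $a_{L}/b_{M}$.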
In our example:
$u'(x)=\gamma x-{\frac {\gamma ^{2}}{2}}\left({\frac {x^{4}}{4!}}\right)+{\frac {11\gamma ^{3}}{4}}\left({\frac {x^{7}}{7!}}\right)-{\frac {375\gamma ^{4}}{8}}\left({\frac {x^{10}}{10!}}\right)$
which, when γ = 0.0408, becomes:
$u'(x)={\frac {0.0204+0.0379\,z-0.0059\,z^{2}-0.00004575\,z^{3}+6.357\cdot 10^{-6}z^{4}-1.291\cdot 10^{-6}z^{5}}{1-0.1429\,z-0.0000232\,z^{2}+0.0008375\,z^{3}-0.0001558\,z^{4}-1.2849\cdot 10^{-6}z^{5}}},$
with the limit:
$\lim _{x\to \infty }u'(x)=1.004.$
This is approximately equal to 1 (the third boundary condition, u′(x) → 1 as x → ∞), with an accuracy of 4/1000.
Partial differential equations
Application to a rectangular system with nonlinearity
One of the most frequent problems in physical sciences is to obtain the solution of a (linear or nonlinear) partial differential equation which satisfies a set of functional values on a rectangular boundary. An example is the following problem:
${\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}-b{\frac {\partial u^{2}}{\partial x}}=\rho (x,y)\qquad (1)$
with the following boundary conditions defined on a rectangle:
$u(x=0)=f_{1}(y)\quad {\text{and}}\quad u(x=x_{l})=f_{2}(y)\qquad {\text{(1-a)}}$
$u(y=-y_{l})=g_{1}(x)\quad {\text{and}}\quad u(y=y_{l})=g_{2}(x)\qquad {\text{(1-b)}}$
This kind of partial differential equation appears frequently coupled with others in science and engineering. For instance, in the incompressible fluid flow problem, the Navier–Stokes equations must be solved in parallel with a Poisson equation for the pressure.
Decomposition of the system
Let us use the following notation for the problem (1):
$L_{x}u+L_{y}u+Nu=\rho (x,y)\qquad (2)$
where Lx, Ly are the second-derivative operators in x and y, and N is a non-linear operator.
The formal solution of (2) is:
$u=a(y)+b(y)x+L_{x}^{-1}\rho (x,y)-L_{x}^{-1}L_{y}u-L_{x}^{-1}Nu\qquad (3)$
Expanding now u as a set of contributions to the solution we have:
$u=u_{0}+u_{1}+u_{2}+u_{3}+\cdots $
By substitution in (3) and making a one-to-one correspondence between the contributions on the left side and the terms on the right side we obtain the following iterative scheme:
${\begin{aligned}u_{0}&=a_{0}(y)+b_{0}(y)x+L_{x}^{-1}\rho (x,y)\\u_{1}&=a_{1}(y)+b_{1}(y)x-L_{x}^{-1}L_{y}u_{0}+b\int dxA_{0}\\&\cdots \\u_{n}&=a_{n}(y)+b_{n}(y)x-L_{x}^{-1}L_{y}u_{n-1}+b\int dxA_{n-1}\quad 0<n<\infty \end{aligned}}$
where the couple {an(y), bn(y)} is the solution of the following system of equations:
${\begin{aligned}\varphi ^{n}(x=0)&=f_{1}(y)\\\varphi ^{n}(x=x_{l})&=f_{2}(y),\end{aligned}}$
here $\varphi ^{n}\equiv \sum _{i=0}^{n}u_{i}$ is the nth-order approximant to the solution and N u has been consistently expanded in Adomian polynomials:
${\begin{aligned}Nu&=-b\partial _{x}u^{2}=-b\partial _{x}(u_{0}+u_{1}+u_{2}+u_{3}+\cdots )(u_{0}+u_{1}+u_{2}+u_{3}+\cdots )\\&=-b\partial _{x}(u_{0}u_{0}+2u_{0}u_{1}+u_{1}u_{1}+2u_{0}u_{2}+\cdots )\\&=-b\partial _{x}\sum _{n=1}^{\infty }A(n-1),\end{aligned}}$
where $A_{n}=\sum _{\nu =1}^{n}C(\nu ,n)f^{(\nu )}(u_{0})$ and f(u) = u2 in the example (1).
Here C(ν, n) are products (or sums of products) of ν components of u whose subscripts sum to n, divided by the factorial of the number of repeated subscripts. This is only a rule of thumb for ordering the decomposition systematically, to be sure that all the combinations that appear are used sooner or later.
The sum $\sum _{n=0}^{\infty }A_{n}$ is equal to a generalized Taylor series of f(u) about u0.[1]
For the example (1) the Adomian polynomials are:
${\begin{aligned}A_{0}&=u_{0}^{2}\\A_{1}&=2u_{0}u_{1}\\A_{2}&=u_{1}^{2}+2u_{0}u_{2}\\A_{3}&=2u_{1}u_{2}+2u_{0}u_{3}\\&\cdots \end{aligned}}$
Other possible choices are also possible for the expression of An.
Series solutions
Cherruault established that the series terms obtained by Adomian's method approach zero as 1/(mn)! if m is the order of the highest linear differential operator and that $\lim _{n\to \infty }\varphi ^{n}=u$.[5] With this method the solution can be found by systematically integrating along any of the two directions: in the x-direction we would use expression (3); in the alternative y-direction we would use the following expression:
$u=c(x)+d(x)y+L_{y}^{-1}\rho (x,y)-L_{y}^{-1}L_{x}u-L_{y}^{-1}Nu$
where: c(x), d(x) is obtained from the boundary conditions at y = - yl and y = yl:
${\begin{aligned}u(y=-y_{l})&=g_{1}(x)\\u(y=y_{l})&=g_{2}(x)\end{aligned}}$
If we call the two respective solutions x-partial solution and y-partial solution, one of the most interesting consequences of the method is that the x-partial solution uses only the two boundary conditions (1-a) and the y-partial solution uses only the conditions (1-b).
Thus, one of the two sets of boundary functions {f1, f2} or {g1, g2} is redundant, and this implies that a partial differential equation with boundary conditions on a rectangle cannot have arbitrary boundary conditions on the borders, since the conditions at x = x1, x = x2 must be consistent with those imposed at y = y1 and y = y2.
An example to clarify this point is the solution of the Poisson problem with the following boundary conditions:
${\begin{aligned}u(x=0)&=f_{1}(y)=0\\u(x=x_{l})&=f_{2}(y)=0\end{aligned}}$
By using Adomian's method and a symbolic processor (such as Mathematica or Maple) it is easy to obtain the third-order approximant to the solution. This approximant has an error lower than 5×10−16 at any point, as can be verified by substituting it back into the initial problem and by displaying the absolute value of the residual as a function of (x, y).[6]
The solution at y = -0.25 and y = 0.25 is given by specific functions that in this case are:
$g_{1}(x)=0.0520833\,x-0.347222\,x^{3}+9.25186\times 10^{-17}x^{4}+0.833333\,x^{5}-0.555556\,x^{6}$
and g2(x) = g1(x) respectively.
If a (double) integration is now performed in the y-direction using these two boundary functions, the same solution will be obtained; it satisfies u(x=0, y) = 0 and u(x=0.5, y) = 0 and cannot satisfy any other condition on these borders.
Some people are surprised by these results; it seems strange that not all initial-boundary conditions must be explicitly used to solve a differential system. However, it is a well-established fact that any elliptic equation has one and only one solution for any functional conditions on the four sides of a rectangle, provided there is no discontinuity on the edges. The cause of the misconception is that scientists and engineers normally think of a boundary condition in terms of weak convergence in a Hilbert space (the distance to the boundary function is small enough for practical purposes). In contrast, Cauchy problems impose a point-to-point convergence to a given boundary function and to all its derivatives (and this is a quite strong condition!). For the first ones, a function satisfies a boundary condition when the area (or another functional distance) between it and the true function imposed on the boundary is as small as desired; for the second ones, however, the function must tend to the true function imposed at each and every point of the interval.
The Poisson problem discussed above does not have a solution for arbitrary functional boundary conditions f1, f2, g1, g2; however, given f1, f2 it is always possible to find boundary functions g1*, g2* as close to g1, g2 as desired (in the weak convergence sense) for which the problem has a solution. This property makes it possible to solve Poisson's and many other problems with arbitrary boundary conditions, but never for analytic functions exactly specified on the boundaries. The reader can verify the high sensitivity of PDE solutions to small changes in the boundary conditions by solving this problem integrating along the x-direction, with boundary functions that differ slightly even though they are visually indistinguishable. For instance, the solution with the boundary conditions:
$f_{1,2}(y)=0.00413682-0.0813801\,y^{2}+0.260416\,y^{4}-0.277778\,y^{6}$
at x = 0 and x = 0.5, and the solution with the boundary conditions:
${\begin{aligned}f_{1,2}(y)=0.00413683&-0.00040048\,y-0.0813802\,y^{2}+0.0101279\,y^{3}+0.260417\,y^{4}\\&-0.0694455\,y^{5}-0.277778\,y^{6}+0.15873\,y^{7}+\cdots \end{aligned}}$
at x = 0 and x = 0.5, produce lateral functions with convexity of different sign, even though both boundary functions are visually indistinguishable.
Solutions of elliptic problems and other partial differential equations are highly sensitive to small changes in the boundary function imposed when only two sides are used. And this sensitivity is not easily compatible with models that are supposed to represent real systems, which are described by means of measurements containing experimental errors and are normally expressed as initial-boundary value problems in a Hilbert space.
Improvements to the decomposition method
At least three methods have been reported [6] [7] [8] to obtain the boundary functions g1*, g2* that are compatible with any lateral set of conditions {f1, f2} imposed. This makes it possible to find the analytical solution of any PDE boundary problem on a closed rectangle with the required accuracy, thus allowing one to solve a wide range of problems that the standard Adomian method was not able to address.
The first one perturbs the two boundary functions imposed at x = 0 and x = x1 (condition 1-a) with an Nth-order polynomial in y: p1, p2 in such a way that: f1' = f1 + p1, f2' = f2 + p2, where the norm of the two perturbation functions is smaller than the accuracy needed at the boundaries. These p1, p2 depend on a set of polynomial coefficients ci, i = 1, ..., N. Then, the Adomian method is applied and functions are obtained at the four boundaries which depend on the set of ci, i = 1, ..., N. Finally, a boundary function F(c1, c2, ..., cN) is defined as the sum of these four functions, and the distance between F(c1, c2, ..., cN) and the real boundary functions ((1-a) and (1-b)) is minimized. The problem has been reduced, in this way, to the global minimization of the function F(c1, c2, ..., cN), which has a global minimum for some combination of the parameters ci, i = 1, ..., N. This minimum may be found by means of a genetic algorithm or by using some other optimization method, such as the one proposed by Cherruault (1999).[9]
A second method to obtain analytic approximants of initial-boundary problems is to combine Adomian decomposition with spectral methods.[7]
Finally, the third method proposed by García-Olivares is based on imposing analytic solutions at the four boundaries, but modifying the original differential operator in such a way that it is different from the original one only in a narrow region close to the boundaries, and it forces the solution to satisfy exactly analytic conditions at the four boundaries.[8]
Integral Equations
The Adomian decomposition method may also be applied to linear and nonlinear integral equations to obtain solutions.[10] This corresponds to the fact that many differential equations can be converted into integral equations.[10]
Adomian Decomposition Method
The Adomian decomposition method for a nonhomogeneous Fredholm integral equation of the second kind goes as follows:[10]
Given an integral equation of the form:
$u(x)=f(x)+\lambda \int _{a}^{b}K(x,t)u(t)dt$
We assume we may express the solution in series form:
$u(x)=\sum _{n=0}^{\infty }u_{n}(x)$
Plugging the series form into the integral equation then yields:
$\sum _{n=0}^{\infty }u_{n}(x)=f(x)+\lambda \int _{a}^{b}K(x,t)(\sum _{n=0}^{\infty }u_{n}(t))dt$
Assuming that the sum converges absolutely to $u(x)$, we may interchange the sum and integral as follows:
$\sum _{n=0}^{\infty }u_{n}(x)=f(x)+\lambda \int _{a}^{b}\sum _{n=0}^{\infty }K(x,t)u_{n}(t)dt$
$\sum _{n=0}^{\infty }u_{n}(x)=f(x)+\lambda \sum _{n=0}^{\infty }\int _{a}^{b}K(x,t)u_{n}(t)dt$
Expanding the sum on both sides yields:
$u_{0}(x)+u_{1}(x)+u_{2}(x)+...=f(x)+\lambda \int _{a}^{b}K(x,t)u_{0}(t)dt+\lambda \int _{a}^{b}K(x,t)u_{1}(t)dt+\lambda \int _{a}^{b}K(x,t)u_{2}(t)dt+...$
Hence we may define each $u_{i}(x)$ recursively as follows:
$u_{0}(x)=f(x)$
$u_{i}(x)=\lambda \int _{a}^{b}K(x,t)u_{i-1}dt,\,\,\,\,\,\,\,\,i\geq 1$
which gives us the solution $u(x)$ in the solution form above.
Example
Given the Fredholm integral equation:
$u(x)=\cos(x)+2x+\int _{0}^{\pi }xt\cdot u(t)dt$
Since $f(x)=\cos(x)+2x$, we can set:
$u_{0}(x)=\cos(x)+2x$
$u_{1}(x)=\int _{0}^{\pi }xt\cdot u_{0}(t)\,dt=\int _{0}^{\pi }xt\cdot (\cos t+2t)\,dt=(-2+{\frac {2\pi ^{3}}{3}})x$
$u_{2}(x)=\int _{0}^{\pi }xt\cdot u_{1}(t)\,dt=\int _{0}^{\pi }xt\cdot (-2+{\frac {2\pi ^{3}}{3}})t\,dt=({\frac {-2\pi ^{3}}{3}}+{\frac {2\pi ^{6}}{9}})x$
...
Hence the solution $u(x)$ may be written as:
$u(x)=\cos(x)+2x+(-2+{\frac {2\pi ^{3}}{3}})x+({\frac {-2\pi ^{3}}{3}}+{\frac {2\pi ^{6}}{9}})x+...$
Since this is a telescoping series, we can see that every term after $\cos(x)$ cancels, and the cancelled terms may be regarded as "noise".[10] Thus, $u(x)$ becomes:
$u(x)=\cos(x)$
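The cancellation can be checked directly. The short sketch below is an illustration added here (not part of the original article; it assumes SymPy): it computes the first iterates and shows that each partial sum equals $\cos(x)$ plus a single "noise" term proportional to $x$, which is precisely the term cancelled by the next iterate.

import sympy as sp

x, t = sp.symbols('x t')
u = [sp.cos(x) + 2*x]  # u_0 = f(x)
for i in range(1, 4):
    u.append(sp.integrate(x * t * u[-1].subs(x, t), (t, 0, sp.pi)))  # u_i from u_{i-1}

for N in range(len(u)):
    leftover = sp.simplify(sum(u[:N + 1]) - sp.cos(x))
    print(N, leftover)  # 2*x, 2*pi**3*x/3, 2*pi**6*x/9, 2*pi**9*x/27 (up to printing order)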
See also
• Order of approximation
References
1. Adomian, G. (1994). Solving Frontier problems of Physics: The decomposition method. Kluwer Academic Publishers.
2. Adomian, G. (1986). Nonlinear Stochastic Operator Equations. Kluwer Academic Publishers. ISBN 978-0-12-044375-8.
3. Liao, S.J. (2012), Homotopy Analysis Method in Nonlinear Differential Equation, Berlin & Beijing: Springer & Higher Education Press, ISBN 978-3642251313
4. Wazwaz, Abdul-Majid (2009). Partial Differential Equations and Solitary Waves Theory. Higher Education Press. p. 15. ISBN 978-90-5809-369-1.
5. Cherruault, Y. (1989), "Convergence of Adomian's Method", Kybernetes, 18 (2): 31–38, doi:10.1108/eb005812
6. García-Olivares, A. (2003), "Analytic solution of partial differential equations with Adomian's decomposition", Kybernetes, 32 (3): 354–368, doi:10.1108/03684920310458584
7. García-Olivares, A. (2002), "Analytical approximants of time-dependent partial differential equations with tau methods", Mathematics and Computers in Simulation, 61: 35–45, doi:10.1016/s0378-4754(02)00133-7, hdl:10261/51182
8. García-Olivares, A. (2003), "Analytical solution of nonlinear partial differential equations of physics", Kybernetes, 32 (4): 548–560, doi:10.1108/03684920310463939, hdl:10261/51176
9. Cherruault, Y. (1999). Optimization, Méthodes locales et globales. Presses Universitaires de France. ISBN 978-2-13-049910-7.
10. Wazwaz, Abdul-majid (2015). First Course In Integral Equations, A. World Scientific Publishing Company. ISBN 978-981-4675-16-1. OCLC 1020691303.
|
Wikipedia
|
Infinitary logic
An infinitary logic is a logic that allows infinitely long statements and/or infinitely long proofs.[1] They were introduced by Zermelo in the 1930s.[2]
Some infinitary logics may have different properties from those of standard first-order logic. In particular, infinitary logics may fail to be compact or complete. Notions of compactness and completeness that are equivalent in finitary logic sometimes are not so in infinitary logics. Therefore for infinitary logics, notions of strong compactness and strong completeness are defined. This article addresses Hilbert-type infinitary logics, as these have been extensively studied and constitute the most straightforward extensions of finitary logic. These are not, however, the only infinitary logics that have been formulated or studied.
Considering whether a certain infinitary logic named Ω-logic is complete promises[3] to throw light on the continuum hypothesis.
A word on notation and the axiom of choice
As a language with infinitely long formulae is being presented, it is not possible to write such formulae down explicitly. To get around this problem a number of notational conveniences, which, strictly speaking, are not part of the formal language, are used. $\cdots $ is used to point out an expression that is infinitely long. Where it is unclear, the length of the sequence is noted afterwards. Where this notation becomes ambiguous or confusing, suffixes such as $\bigvee _{\gamma <\delta }{A_{\gamma }}$ are used to indicate an infinite disjunction over a set of formulae of cardinality $\delta $. The same notation may be applied to quantifiers, for example $\forall _{\gamma <\delta }{V_{\gamma }:}$. This is meant to represent an infinite sequence of quantifiers: a quantifier for each $V_{\gamma }$ where $\gamma <\delta $.
Suffixes and $\cdots $ are notational conveniences and are not part of formal infinitary languages.
The axiom of choice is assumed (as is often done when discussing infinitary logic) as this is necessary to have sensible distributivity laws.
Definition of Hilbert-type infinitary logics
A first-order infinitary language Lα,β, α regular, β = 0 or ω ≤ β ≤ α, has the same set of symbols as a finitary logic and may use all the rules for formation of formulae of a finitary logic together with some additional ones:
• Given a set of formulae $A=\{A_{\gamma }|\gamma <\delta <\alpha \}$ then $(A_{0}\lor A_{1}\lor \cdots )$ and $(A_{0}\land A_{1}\land \cdots )$ are formulae. (In each case the sequence has length $\delta $.)
• Given a set of variables $V=\{V_{\gamma }|\gamma <\delta <\beta \}$ and a formula $A_{0}$ then $\forall V_{0}:\forall V_{1}\cdots (A_{0})$ and $\exists V_{0}:\exists V_{1}\cdots (A_{0})$ are formulae. (In each case the sequence of quantifiers has length $\delta $.)
The concepts of free and bound variables apply in the same manner to infinite formulae. Just as in finitary logic, a formula all of whose variables are bound is referred to as a sentence.
A theory T in infinitary language $L_{\alpha ,\beta }$ is a set of sentences in the logic. A proof in infinitary logic from a theory T is a (possibly infinite) sequence of statements that obeys the following conditions: Each statement is either a logical axiom, an element of T, or is deduced from previous statements using a rule of inference. As before, all rules of inference in finitary logic can be used, together with an additional one:
• Given a set of statements $A=\{A_{\gamma }|\gamma <\delta <\alpha \}$ that have occurred previously in the proof then the statement $\land _{\gamma <\delta }{A_{\gamma }}$ can be inferred.[4]
The logical axiom schemata specific to infinitary logic are presented below. Global schemata variables: $\delta $ and $\gamma $ such that $0<\delta <\alpha $.
• $((\land _{\epsilon <\delta }{(A_{\delta }\implies A_{\epsilon })})\implies (A_{\delta }\implies \land _{\epsilon <\delta }{A_{\epsilon }}))$
• For each $\gamma <\delta $, $((\land _{\epsilon <\delta }{A_{\epsilon }})\implies A_{\gamma })$
• Chang's distributivity laws (for each $\gamma $): $(\lor _{\mu <\gamma }{(\land _{\delta <\gamma }{A_{\mu ,\delta }})})$, where $\forall \mu \forall \delta \exists \epsilon <\gamma :A_{\mu ,\delta }=A_{\epsilon }$ or $A_{\mu ,\delta }=\neg A_{\epsilon }$, and $\forall g\in \gamma ^{\gamma }\exists \epsilon <\gamma :\{A_{\epsilon },\neg A_{\epsilon }\}\subseteq \{A_{\mu ,g(\mu )}:\mu <\gamma \}$
• For $\gamma <\alpha $, $((\land _{\mu <\gamma }{(\lor _{\delta <\gamma }{A_{\mu ,\delta }})})\implies (\lor _{\epsilon <\gamma ^{\gamma }}{(\land _{\mu <\gamma }{A_{\mu ,\gamma _{\epsilon }(\mu )})}}))$, where $\{\gamma _{\epsilon }:\epsilon <\gamma ^{\gamma }\}$ is a well ordering of $\gamma ^{\gamma }$
The last two axiom schemata require the axiom of choice because certain sets must be well orderable. The last axiom schema is strictly speaking unnecessary, as Chang's distributivity laws imply it,[5] however it is included as a natural way to allow natural weakenings to the logic.
Completeness, compactness, and strong completeness
A theory is any set of sentences. The truth of statements in models are defined by recursion and will agree with the definition for finitary logic where both are defined. Given a theory T a sentence is said to be valid for the theory T if it is true in all models of T.
A logic in the language $L_{\alpha ,\beta }$ is complete if for every sentence S valid in every model there exists a proof of S. It is strongly complete if for any theory T for every sentence S valid in T there is a proof of S from T. An infinitary logic can be complete without being strongly complete.
A cardinal $\kappa \neq \omega $ is weakly compact when for every theory T in $L_{\kappa ,\kappa }$ containing at most $\kappa $ many formulas, if every S $\subseteq $ T of cardinality less than $\kappa $ has a model, then T has a model. A cardinal $\kappa \neq \omega $ is strongly compact when for every theory T in $L_{\kappa ,\kappa }$, without restriction on size, if every S $\subseteq $ T of cardinality less than $\kappa $ has a model, then T has a model.
Concepts expressible in infinitary logic
In the language of set theory the following statement expresses foundation:
$\forall _{\gamma <\omega }{V_{\gamma }:}\neg \land _{\gamma <\omega }{V_{\gamma +1}\in V_{\gamma }}.\,$
Unlike the axiom of foundation, this statement admits no non-standard interpretations. The concept of well-foundedness can only be expressed in a logic that allows infinitely many quantifiers in an individual statement. As a consequence, many theories, including Peano arithmetic, which cannot be properly axiomatised in finitary logic, can be axiomatised in a suitable infinitary logic. Other examples include the theories of non-archimedean fields and torsion-free groups.[6] These three theories can be defined without the use of infinite quantification; only infinite junctions[7] are needed.
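For instance (an illustration added here), torsion-freeness of a group is expressed by the single $L_{\omega _{1},\omega }$ sentence $\land _{0<n<\omega }\forall x:(x^{n}=e\implies x=e),$ where $x^{n}$ abbreviates the $n$-fold product $x\cdot x\cdots x$; this is a countable conjunction of finitary formulas, so no infinite string of quantifiers is needed.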
Complete infinitary logics
Two infinitary logics stand out in their completeness. These are the logics of $L_{\omega ,\omega }$ and $L_{\omega _{1},\omega }$. The former is standard finitary first-order logic and the latter is an infinitary logic that only allows statements of countable size.
The logic of $L_{\omega ,\omega }$ is also strongly complete, compact and strongly compact.
The logic of $L_{\omega _{1},\omega }$ fails to be compact, but it is complete (under the axioms given above). Moreover, it satisfies a variant of the Craig interpolation property.
If the logic of $L_{\alpha ,\alpha }$ is strongly complete (under the axioms given above) then $\alpha $ is strongly compact (because proofs in these logics cannot use $\alpha $ or more of the given axioms).
References
1. Moore, Gregory (1997). "The Prehistory of Infinitary Logic: 1885–1955". Structures and Norms in Science. pp. 105–123. doi:10.1007/978-94-017-0538-7_7. ISBN 978-90-481-4787-8.
2. A. Kanamori, "Zermelo and Set Theory". pp.529--543. Bulletin of Symbolic Logic vol. 10, no. 4 (2004). Accessed 22 August 2023.
3. Woodin, W. Hugh (2009). "The Continuum Hypothesis, the generic-multiverse of sets, and the Ω Conjecture" (PDF). Harvard University Logic Colloquium.
4. Karp, Carol (1964). "Chapter 5 Infinitary Propositional Logic". Languages with Expressions of Infinite Length. pp. 39–54. doi:10.1016/S0049-237X(08)70423-3. ISBN 9780444534019.
5. Chang, Chen-Chung (1955). "Algebra and Theory of Numbers" (PDF). Bulletin of the American Mathematical Society. 61: 325–326.
6. Rosinger, Elemer (2010). "Four Departures in Mathematics and Physics". arXiv:1003.0360. CiteSeerX 10.1.1.760.6726.
7. Bennett, David (1980). "Junctions". Notre Dame Journal of Formal Logic. XXI (1): 111–118. doi:10.1305/ndjfl/1093882943.
• Karp, Carol R. (1964), Languages with expressions of infinite length, Amsterdam: North-Holland Publishing Co., MR 0176910
• Barwise, Kenneth Jon (1969), "Infinitary logic and admissible sets", Journal of Symbolic Logic, 34 (2): 226–252, doi:10.2307/2271099, JSTOR 2271099, MR 0406760, S2CID 38740720
|
Wikipedia
|
\begin{document}
\today
\vskip 1cm \begin{center}{\sc Homoclinic bifurcation in Morse-Novikov theory,
a doubling phenomenon}
\vskip 1cm
{\sc Fran\c cois Laudenbach \& Carlos Moraga Ferr\'andiz}
\end{center} \title{} \author{} \address{Universit\'e de Nantes, LMJL, UMR 6629 du CNRS, 44322 Nantes, France} \email{[email protected]}
\address{36, av. Camille Gu\'erin, 44000 Nantes, France} \email{[email protected]}
\keywords{Closed one-form, Morse-Novikov theory, gradient, Kupka-Smale, homoclinic bifurcation}
\subjclass[2010]{57R99, 37B35, 37D15}
\thanks{Both authors were supported by ERC Geodycon; the second author was also supported by the Japan Society for the Promotion of Science, under the ``FY2013 Postdoctoral Fellowship (short-term) for North American and European Researchers'' program.}
\begin{abstract} We consider a compact manifold of dimension greater than 2 and a differential form of degree one which is closed but non-exact. This form, viewed as a multi-valued function, has a gradient vector field with respect to any Riemannian metric. After S. Novikov's work and a complement by J.-C. Sikorav, under some genericity assumptions these data yield a complex, called today the Morse-Novikov complex. Due to the non-exactness of the form, its gradient has a non-trivial dynamics, in contrast to gradients of functions. In particular, it is possible that the gradient has a homoclinic orbit. The one-form being fixed, we investigate the codimension-one stratum in the space of gradients which is formed by gradients having one simple homoclinic orbit. Such a stratum S breaks up into a left and a right part separated by a substratum. The algebraic effect on the Morse-Novikov complex of crossing S depends on the part, left or right, which is crossed. The sudden creation of infinitely many new heteroclinic orbits may happen. Moreover, some gradients with a simple homoclinic orbit are approached by gradients with a simple homoclinic orbit of double energy. These two phenomena are linked.
\end{abstract}
\maketitle \thispagestyle{empty} \vskip 1cm \section{Introduction}
\begin{rien}{\sc Morse-Novikov Theory setup.} {\rm We are given a closed connected $n$-dimensional manifold $M$ equipped with a closed differential form of degree one
$\alpha$ of type Morse, meaning that its zeroes are non-degenerate.
In other words, the local primitives
$f_{loc}$ of this 1-form are Morse functions. Morse-Novikov Theory deals with the case where $\alpha$ is non-exact, that is
the cohomology class $u$ of $\alpha$ is non-zero in $H^1(M;\mathbb{R})$. The set of zeroes of $\alpha$ will be denoted $Z(\alpha)$;
each zero $p\in Z(\alpha)$ has a Morse index $i(p)\in \{0, \ldots, n\}$. The set of zeroes of index $k$ is noted $Z_k(\alpha)$.
Since the undeterminacy of $f_{loc}$ is just an additive constant, for any Riemannian metric the {\it descending} or
{\it negative}
gradient $-\nabla f_{loc}$ is globally defined.
Such a vector field $X$ will be said an $\alpha${\it -gradient}.
The zeroes of $X$ coincide with the zeroes of $\alpha$ and are hyperbolic. Therefore, each zero $p$ of $\alpha$
has a stable manifold $W^s(p,X)$ and an unstable manifold $W^u(p,X)$. Both are manifolds which are injectively
immersed\footnote{When $\alpha$ is exact (case of Morse Theory), the stable and unstable manifolds are embedded.}
in $M$; the unstable (resp. stable) manifold is diffeomorphic to $\mathbb{R}^{i(p)}$ (resp. $\mathbb{R}^{n-i(p)}$). In the present article,
we will point out some dynamical particularities of gradients of {\it multivalued Morse functions}, according to the
terminology introduced by S. Novikov for speaking of Morse closed 1-forms \cite{novikov}.
By Kupka-Smale's theorem \cite{peixoto, palis},
generically among the $\alpha$-gradients
the invariant manifolds $W^u(p, X)$ and $W^s(q,X)$ are mutually transverse for every $p,q\in Z(\alpha)$. Note that this
property is not open in general.
A vector field whose zeroes are hyperbolic and which fulfils this transversality property
will be named a {\it Kupka-Smale} vector field in what follows though the classical definition is more restrictive;\footnote{ In the literature, a vector field is said to be Kupka-Smale
if the periodic orbits are all hyperbolic and their center-stable and
center-unstable manifolds are mutually transverse.} for brevity, we shall speak of $KS$ vector fields.
In that case, if an orbit of $X$ is a {\it connecting orbit} going from $p$ to $q$ then, as in Morse Theory, we have \begin{equation} i(p)>i(q). \end{equation} In particular, as $p\neq q$ such an orbit is heteroclinic. } \end{rien}
\begin{rien}{\sc A key fact.} \label{deep} {\rm Let us define the $\alpha$-{\it length} of an $\alpha$-gradient orbit $\ell$ by \begin{equation}\label{eq:LengthCO} \mathcal{L}(\ell):= -\int_\ell \alpha. \end{equation} This positive number is nothing but the Riemannian \emph{energy} of $\ell$. It may be infinite which is the case for almost every orbit when the local primitive has no critical points of extremal Morse index. Here is a {\sc key fact} which makes Morse-Novikov gradient dynamics very special; its proof, indeed elementary, will be given in Appendix A (also \cite[Prop. 2.8]{latour}).
\begin{itemize} \item[]{\it Assume $X$ is $KS$. Let $p\in Z_k(\alpha)$ and $q\in Z_{k-1}(\alpha)$. Then, for every $L>0$, the number of connecting orbits from $p$ to $q$ whose $\alpha$-length is bounded by $L$ is finite.}
\end{itemize}
\noindent This fact might be the basic observation when Novikov discovered his famous {\it Novikov ring}. Anyhow, it gives a way for reading an algebraic ``counting'' of the gradient orbits which connect two zeroes of $\alpha$
when the difference of their Morse indices is exactly one. We are going to sketch how this counting is made. This will be relevant for dynamical information.
} \end{rien}
\begin{rien}{\sc Introduction to the Morse-Novikov complex.}{\rm
In our paper, the Morse-Novikov complex is just a tool for \emph{encoding} a part of the dynamics of $\alpha$-gradients: namely, the count of orbits connecting two zeroes whose Morse indices differ by one. We adopt a point of view which is due to J.-C. Sikorav \cite{sikorav}. For him, the \emph{universal} Novikov ring $\Lambda_u$ is some \emph{completion} of the group ring $\mathbb{Z}[\pi_1(M,*)]$ of the fundamental group
associated with the cohomology class $[\alpha]=u$
(see Novikov Condition (\ref{eq:NovikovCond})).
The Morse-Novikov complex is only defined when the $\alpha$-gradient $X$ is Kupka-Smale. As a graded module, $NC_*(\alpha, X)$ in degree $k$ is the free module generated by $Z_k(\alpha)$ over the ring $\Lambda_u$. Each unstable manifold $W^u (p,X)$ is oriented arbitrarily. The differential $\partial^X: NC_k(\alpha, X)\to NC_{k-1}(\alpha, X)$ has the following form on a generator $p\in Z_k(\alpha)$: \begin{equation} \partial^Xp= \sum_{q\in Z_{k-1}(\alpha)}\left( \sum_{\gamma\in \Gamma_p^q}n_\gamma(p,q) \gamma\right) q \end{equation} Here, $n_\gamma(p,q)$ is a relative integer, $\Gamma_p^q$ denotes the set of homotopy classes of paths from $p$ to $q$. As the unstable manifolds are oriented, and hence, the stable manifolds are co-oriented, each connecting orbit from $p$ to $q$ carries a sign (once some conventions are fixed). The integer $n_\gamma(p,q)$ is the total number of the signs carried by the connecting orbits in the homotopy class $\gamma$. If $n_\gamma(p,q)\neq 0$, there is at least one connecting orbit from $p$ to $q$ in the class $\gamma$. One checks that $\partial^X\circ \partial^X=0$ which justifies the term ``complex'' in the algebraic sense.
By Stokes' formula, all connecting orbits in a given homotopy class $\gamma$ have the same $\alpha$-length. Thus, by the {\sc key fact}, $n_\gamma(p,q)$ is finite. Moreover,
by definition of the Novikov ring $\Lambda_u$
(see \cite{farber} or (\ref{eq:NovikovCond})), the {\it incidence coefficient} \begin{equation}\label{incidence} n(p,q):= \sum_{\gamma\in \Gamma_p^q}n_\gamma(p,q) \gamma \end{equation} is an element of $\Lambda_u$, and hence, $\partial ^X$ is a morphism of graded $\Lambda_u$-rings of degree -1. Note the following: If $n(p,q)$ is an infinite series then there is a sequence $(\gamma_j)$ of homotopy classes in $\Gamma_p^q$
whose $\alpha$-lengths go to $+\infty$ and such that each $\gamma_j$ contains at least one connecting orbit.} \end{rien}
\begin{rien}{\sc Homoclinic bifurcation.}\label{list} {\rm
The complement of the set of $KS$ gradients is stratified thanks to a measure of the defect from being a Kupka-Smale vector field. Here, we list the strata\footnote{ At the beginning, the stratification in question is just a collection of disjoint subsets of the space of considered gradients. Theorem \ref{thm1} gives some of them the status of genuine {\it strata}. }
of ``codimension one'': \begin{itemize} \item[1)] There is a unique pair $(p,q)$ with $i(p)=i(q)+1$ and a unique $X$-orbit from $p$ to $q$ along which $W^u(p, X)$
shows a minimal transversality defect to $W^s(q,X)$. Crossing this stratum consists of creating or cancelling a pair of orbits of opposite sign. Such a pair does not appear in the counting above mentioned. So, we neglect the strata of this type. \item[2)] There is a unique pair $(p,p')$, with $i(p)= i(p')$ and $p\neq p'$, and a unique connecting orbit from $p$ to $p'$. Crossing such a stratum consists of making a {\it handle slide} in the sense of Morse Theory.
This is out of the scope of our paper. \item[3)] There is a unique homoclinic orbit from $p$ to itself which is {\it simple}\footnote{ 'Simple' in the multiplicity sense---see Subsection (\ref{ssec:tube}).} and {\it non-broken}\footnote{ A broken orbit is the concatenation of orbits, one being linked to the next one by some zero.}. We are only interested in the study of this bifurcation which we call {\it homoclinic bifurcation}. This was also considered
by { M. Hutchings}
\cite{hutchings} (see also \cite{moraga}). \end{itemize}
If the gradient $X$ belongs to the stratum $\mathcal{S}$ under consideration in 3), the unique homoclinic orbit $\ell$ of $X$ forms a loop with the zero $p\in Z(\alpha)$ as a base point. As previously said, the $\alpha$-length is positive and by Stokes' formula, the loop $\ell$ is not homotopic to zero. Let $g$ denote its homotopy class in $\pi_1(M,p)$ and let denote by $\mathcal{S}_g\subset \mathcal{S}$ the stratum made of the $\alpha$-gradients whose homoclinic orbit belongs
to the homotopy class $g$. Actually, $\mathcal{S}$ is the disjoint union $\{\mathop{\sqcup}\limits_g\mathcal{S}_g\mid g\in \pi_1(M,p)\}$.
From now on, we are going to make some restrictions on the underlying Riemannian metric: we impose the metric to make the $\alpha$-gradient {\it adapted}, meaning that
it is linearizable at every zero $p\in Z(\alpha)$ with a spectrum in $\{-1,1\}$ (See Definition \ref{df:Adapted}). Up to some rescaling, this
spectrum condition means that in linearizing coordinates about $p$ the $\alpha$-gradients are \emph{radial} on both the local stable and unstable manifolds.
Let $\mathcal{F}_\alpha$ denote the space of adapted $\alpha$-gradients.
This constraint on the spectrum (not at all generic) is more informative than a metric yielding a {\it non-resonant} spectrum.\footnote{ This brings us back to a principle that R. Thom strongly defended in the seventies: a non-generic object is richer than one of its generic approximations since it contains the information of its {\it universal unfolding}.\label{universal}} Our constraint, for which Kupka-Smale's Theorem still applies (among the adapted $\alpha$-gradients with given germs), {\it enriches} the holonomy of $\ell$
so much that there is a well-defined real function $\chi:\mathcal{S}_g\to\mathbb{R}$ which depends only on the linearized holonomy
of $\ell$ from $p$ to itself and is continuous. This function will be constructed in Section \ref{sect:Sg}
and will play the main r\^ole in our paper.
We name $\chi$ the {\it character function}. We have the following statement.
\begin{thm}\label{thm1}${}$
\begin{enumerate}
\item
The stratum $\mathcal{S}_g$ is a
codimension-one, co-oriented submanifold of $\mathcal{F}_\alpha$ of class $C^\infty$.
\item Assume $n>2$ and $\mathcal{S}_g\neq\emptyset$.
Then, the vanishing locus $\mathcal{S}^0_g$ of $\chi$ is a non-empty
co-oriented codimension-one submanifold of class $C^\infty$ in $\mathcal{S}_g$ meeting
each of its connected components.
\end{enumerate}
\end{thm}
\begin{figure}
\caption{Local situation in $\mathcal{F}_\alpha$ of every connected component of $\mathcal{S}_g$ near $\mathcal{S}_g^0$. The arrows indicate the co-orientation of $\mathcal{S}_g$ in $\mathcal{F}_\alpha$ and $\mathcal{S}_g^0$ in $\mathcal{S}_g$.}
\label{fig:Sg0dsSg}
\end{figure} Set $\mathcal{S}_g^+: =\{X\in \mathcal{S}_g\mid \chi(X)>0\}$ and $\mathcal{S}_g^-: =\{X\in \mathcal{S}_g\mid \chi(X)<0\}$. We shall see that crossing $\mathcal{S}_g$ positively through $\mathcal{S}_g^+ $ or $\mathcal{S}_g^- $ changes the Morse-Novikov complex in a completely different manner. } \end{rien} \begin{rien}{\sc Bifurcation by crossing $\mathcal{S}_g$.} {\rm
Since $ \mathcal{S}_g$ is co-oriented, we can study the generic
one-parameter families $
(X_s)_{s\in {\mathcal{O} p(0)}}$ which intersect $\mathcal{S}_g$ \emph{positively} at $X_0$; here, we use Gromov's notation: ${\mathcal{O} p(0)}$
stands for an open interval which contains 0 and whose size is chosen as small as desired.\footnote{More generally, if
$A$ is a closed subset of $B$, $\mathcal Op(A)$ stands for an open
neighborhood of $A$ in $B$ which is not specified.}
In other words, the path $(X_s)_{s\in {\mathcal{O} p(0)}}$ has to be thought as a germ of path at $X_0$.
We recall that $p$ is the zero of $\alpha$ which is involved in the homoclinic orbit of $X_0$. Let $q\in Z(\alpha)$ be any zero whose Morse index satisfies $i(q)=i(p)-1$.
The next theorem requires some genericity assumption, namely the property for $X\in \mathcal{S}_g$ to be
{\it almost Kupka-Smale} (Definition \ref{almost}). The corresponding residual set is noted $\mathcal{S}_{g,\infty}$. Let $X_0\in \mathcal{S}_{g,\infty}$. Take any $L>0$. By Proposition \ref{g-residual}, for every $s\neq 0$ close enough to 0 the vector field $X_s$ is {\it Kupka-Smale up to $L$}, that is, the algebraic number of
connecting orbits from $p$ to $q$ with $\alpha$-length smaller than $L$ is finite and locally constant. Denote it by $n(p,q)_L^-$ or $n(p,q)_L^+$ depending on the sign of $s$; these numbers are called {\it truncated incidence coefficients}.
Theorem \ref{thm:selfslideSimplif} states how these truncated incidence coefficients change through crossing $\mathcal{S}_g$.\footnote{ This makes more precise the statement of \cite[Prop. 2.2.36]{moragaTesis} whose proof was inaccurate.} More explanation of the formulas will be given just after the statement.
\begin{thm}\label{thm:selfslideSimplif}${}$
Let $(X_s)_{s\in {\mathcal{O} p(0)}}$ be a path crossing $\mathcal{S}_g$ positively at time $s=0$. If $X_0\in \mathcal{S}_{g,\infty}$
the following holds for every $L>0$: \begin{enumerate} \item when $X_0$ belongs to $\mathcal{S}_g^-$, then $n(p,q)_L^+ =(1+g)\cdot n(p,q)_L^-$\quad{\rm (mod. $L$)}, \item when $X_0$ belongs to $\mathcal{S}_g^+$, then $n(p,q)_L^+ =(1+g+ g^2+ \cdots)\cdot n(p,q)_L^-$\quad{\rm (mod. $L$)}. \end{enumerate} \end{thm}
Of course, in order to keep the squared differential equal to zero there are similar formulas for the change of the incidence $n(q',p)_L^\pm$ when the Morse indices satisfy $i(q')=i(p)+1$. More precisely, we have: {\it \begin{itemize} \item[(3)] when $X_0\in \mathcal{S}_g^-$, then $n(q',p)_L^+= n(q',p)_L^-\cdot (1-g+g^2- g^3+ \cdots)$\quad{\rm (mod. $L$)}, \item[(4)] when $X_0\in \mathcal{S}_g^+$, then $n(q',p)_L^+= n(q',p)_L^-\cdot (1-g)$\quad{\rm (mod. $L$)}. \end{itemize}}
The explanation for the product in Formulas (1) and (2) goes as follows (and similarly for (3) and (4)). On the one hand, recall Formula (\ref{incidence}); namely,
$n(p,q)= \sum_\gamma n_\gamma(p,q)\gamma$ where $\gamma\in \Gamma_p^q$ is a homotopy class of paths from $p$ to $q$ and $n_\gamma(p,q)$ is an integer which gives the total algebraic number of connecting orbits in that class. On the other hand, $g$ is a homotopy class of loops based at $p$. So, the concatenation $g^j\cdot \gamma$ makes sense and yields a new element
in $\Gamma_p^q$. This product is distributive with respect to the sum.\\
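As a purely illustrative instance of Formula (1), with hypothetical classes $\gamma_1,\gamma_2\in \Gamma_p^q$: if $n(p,q)_L^-=\gamma_1-2\gamma_2$ and $X_0\in\mathcal{S}_g^-$, then Formula (1) gives
$$
n(p,q)_L^+ =(1+g)\cdot(\gamma_1-2\gamma_2)= \gamma_1-2\gamma_2+g\cdot\gamma_1-2\,g\cdot\gamma_2\quad{\rm (mod. $L$)}.
$$
Formula (2) expands in the same way, except that the factor is the series $1+g+g^2+\cdots$, of which only finitely many terms survive the truncation by $L$ since $u(g)<0$.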
} \end{rien} \begin{remarque}
We have examples, described in \cite{l-m}, where the homoclinic bifurcation is isolated. In that case, truncation by $L$ becomes unnecessary.
There exist a pair $p, q$ of zeroes with $i(p)=i(q)+1$ and a one-parameter family of adapted $\alpha$-gradients $\left(X_s\right)_{s\in [-\varepsilon,+\varepsilon]}$ crossing $\mathcal{S}_g$ positively such that: \begin{itemize} \item[-] for every $s<0$ there is a unique heteroclinic orbit $\ell $ from $p$ to $q$; \item[-] for every $s>0$ there are infinitely many heteroclinic orbits from $p$ to $q$. \end{itemize} This infinity, which appears at once,
reads as follows: one heteroclinic orbit in each homotopy class $g^j\cdot [\ell]$, for $j= 0, 1, 2, ...$; here, [-] stands for the homotopy class.\\ \end{remarque}
\begin{rien}{\sc The doubling phenomenon.} {\rm
This relates the strata $\mathcal{S}_g$ and $\mathcal{S}_{g^2}$. Look for instance at the above-mentioned example and consider a small generic loop in $\mathcal{F}_\alpha$ going around the codimension-two stratum $\mathcal{S}_g^0$, beginning by crossing $\mathcal{S}_g^-$ positively and returning by crossing $\mathcal{S}_g^+$ negatively. Then the cumulative factor after one turn
is $(1-g^2)$
while it should be equal to 1; this is an apparent contradiction. The next theorem resolves it.
We emphasize that its statement does not need the presence of any other zero of $\alpha$ than the base point
of the homoclinic orbit.
\begin{thm}\label{thm:Doubling} Again assume $n>2$ and $\mathcal{S}_g\neq \emptyset$.
Then, there exists a codimension-two stratum $\mathcal{S}_g^{0,0}$ in $\mathcal{S}_g$, contained in $\mathcal{S}_g^0$, such that $\mathcal{S}_g^0\smallsetminus \mathcal{S}_g^{0,0}$ adheres to $\mathcal{S}_{g^2}$ as a boundary\footnote{This does not contradict the fact that by their very definition $\mathcal{S}_g$ and $\mathcal{S}_{g^2}$ are disjoint.} of class $C^1$,
more precisely of its positive part $\mathcal{S}_{g^2}^+$.
\end{thm}
In dynamical language, if $X$ is an adapted $\alpha$-gradient in $\mathcal{S}_g^0$ (that is, $\chi(X)= 0$) whose homoclinic orbit is $\ell$, then $X$ can be approximated by an adapted $\alpha$-gradient $X'$ having a unique homoclinic orbit $\ell'$ turning twice around $\ell$. In particular, $[\ell']= [\ell]^2$ in $\pi_1(M,p)$.
The precise definition of $\mathcal{S}_g^{0,0}$ yields a decomposition $\mathcal{S}_g^0\smallsetminus\mathcal{S}_g^{0,0}=\mathcal{S}_g^{0,-}\,\sqcup\,\mathcal{S}_g^{0,+}$ (see Definition \ref{decompositionS^0}). Locally along $\mathcal{S}_g^0\smallsetminus\mathcal{S}_g^{0,0}$, the stratum $\mathcal{S}_{g^2}$ approaches from one side of
$\mathcal{S}_g$ only. As a matter of fact, $\mathcal{S}_{g^2}$ approaches $\mathcal{S}_g^{0,+}$ (resp. $\mathcal{S}_g^{0,-}$)
from the positive (resp. negative) side of the co-oriented stratum $\mathcal{S}_g$.
A precise statement about the latter facts is given in Theorem \ref{thm:DoublingRefined} and
may be illustrated by Figure \ref{figure2}. \begin{figure}
\caption{The stratum $\mathcal{S}_g^0\smallsetminus\mathcal{S}_g^{0,0}$ as a boundary of $\mathcal{S}_{g^2}$ which is grayed.}
\label{fig:Sg0Detail}
\label{figure2}
\end{figure} } \end{rien}
\begin{remarques}${}$
\noindent 1) Our doubling phenomenon evokes the period doubling bifurcation, also called Andronov-Hopf's bifurcation. In this direction, it would be good to know that crossing $\mathcal{S}_g$ creates (or destroys) a periodic orbit in the free homotopy class $g$. Shilnikov's theorem \cite{shilnikov} deals with this question. Unfortunately, it is not applicable here because we are going to use very non-generic Morse charts (only $+1$ and $-1$ as eigenvalues of the Hessian at critical points). In return,
such charts offer very nice advantages.\\
\noindent 2) We were asked whether our results depend on the assumption that the
$\alpha$-gradient $X$ is {\it adapted}
in the sense of Definition \ref{df:Adapted}. Most probably,
if this assumption is not fulfilled, there is no way to define something which gives so much information as
the {\it character function}. In that case, all three of the above-stated
theorems no longer make sense.\footnote{ This confirms Thom's principle which we have mentioned in Footnote \ref{universal}.}\\
\noindent 3) It was also suggested that we look for a more general setting for our results. For instance, one could fix
a Riemannian metric and look
at vector fields with given hyperbolic zeroes. In this setting, the key fact \ref{deep} still holds true if the
$\alpha$-length is replaced with the Riemannian energy. Unfortunately, there is no natural stratification
of the complement of Kupka-Smale vector fields. Namely, there is no ``grading'' of the homoclinic orbits
based at a zero of the considered vector field. For instance, one faces, in general, a failure of equicontinuity of sequences
of the homoclinic orbits in a given homotopy class. Therefore, we do not see any natural generalization of our study.\\
\noindent {\bf Acknowledgement.} The first author is grateful to Yulij Ilyashenko for fruitful discussions
on the occasion of the Conference ``Topological methods in dynamics and related topics'', Nizhny Novgorod, 2019;
and to Slava Grines who invited him. We also thank Dirk Schuetz and Claude Viterbo for pertinent comments.
\end{remarques}
\section{\sc Homoclinic bifurcation, orientation and character}\label{sect:Sg}
We are focusing on homoclinic bifurcations even though some of the statements
hold true for other bifurcations (see the list in Subsection \ref{list}). We consider an $\alpha$-gradient $X$
with a simple homoclinic
orbit $\ell$ based at some zero $p\in Z(\alpha)$ in the homotopy class $g\in \pi_1(M,p)$.
In this section, we are going to show that if the Morse coordinates about $p$ are {\it simple} in the sense
of Definition \ref{df:Adapted} then the Morse model $\mathcal{M}_p$ in these coordinates
allows us to enrich the holonomy of $\ell$ with some specific information.
From the latter we deduce the {\it character function}, which is the key new tool of the paper. Finally, we prove
Theorem \ref{thm1}.
\begin{defn}\label{df:Adapted}${}$
\noindent {\rm 1)} For each zero $p$ of $\alpha$ of index $i(p)=i$,
\emph{simple Morse coordinates} about $p$ are coordinates where the form $\alpha$ is equal to the differential of the standard quadratic form $$Q_i:= \frac 12\left[-x_1^2-\ldots-x_i^2+x_{i+1}^2+\ldots +x_n^2\right]. $$
\noindent {\rm 2)} An $\alpha$-gradient $X$ is said to be \emph{adapted} if
for every $p\in Z(\alpha)$ there are simple Morse coordinates about $p$ such that
$X$ coincides with the \emph{standard} descending gradient $X_{i}$
of $Q_i$, where $i=i(p)$, that is:
$$X_{i}=\sum_1^i x_k\partial_{x_k}-\sum_{i+1}^n x_k\partial_{x_k}.$$
Such Morse coordinates are also said to be \emph{adapted} to $X$.
\end{defn}
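As a quick sanity check, added here for the reader's convenience, one verifies directly that $X_i$ descends the levels of $Q_i$ in simple Morse coordinates:
$$
dQ_i(X_{i})=-\sum_{k=1}^{i} x_k^2-\sum_{k=i+1}^{n} x_k^2=-\vert x\vert^2\leq 0,
$$
with equality only at the origin; so $X_{i}$ is indeed a descending gradient of the local primitive $Q_i$ of $\alpha$.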
The property for $X$ to be adapted depends only on the germ of $X$ near $Z(\alpha)$.
We recall that for simplicity we fix the germ of adapted $\alpha$-gradients once and for all at every zero of $\alpha$;
the set of such adapted $\alpha$-gradients is denoted by $\mathcal{F}_\alpha$.
\begin{remarque}\label{alexander}
The simplicial group $\mathcal{G}:=\text{Diff}(Q_i)$ of germs of diffeomorphisms of
$(\mathbb{R}^n,0)$ preserving $Q_i$ retracts by deformation to $O(i,n-i)$, the linear group of isometries of $Q_i$.
Indeed, if
$\varphi\in \mathcal{G}$, the Alexander isotopy $\varphi_t: x\mapsto \frac 1t\varphi(tx)$ is made of elements in $\mathcal{G}$
for every $t\in (0,1]$ and tends to the derivative $\varphi'(0)x$ as $t $ goes to 0.
Moreover, $O(i,n-i)$ retracts by deformation to its maximal compact subgroup $O(i)\times O(n-i)$
which is the isometry group of the pair $(Q_i,X_i)$.
As a consequence, the space of germs
of adapted $\alpha$-gradients is made of a unique element up to the action of ${\rm Diff}(M\,\text{rel}\, Z(\alpha))$. For this reason, with
no loss of generality, we may fix the germ at $p$ of all the considered adapted $\alpha$-gradients in what follows
for every $p\in Z(\alpha)$. This choice will be made for all bifurcation families in Sections \ref{section3} and \ref{section4}.
\end{remarque}
\subsection{\sc Morse model.} \label{ssec:morse}
Given $p\in Z(\alpha)$ of Morse index $i$,
a {\it Morse model} $\mathcal{M}_p\subset M$ with positive parameters $(\delta,\delta^*)$ (which we do not
make explicit in the notation)
is diffeomorphic to the subset of
$\mathbb{R}^i\times\mathbb{R}^{n-i}$ made of pairs $(x^-,x^+)$ such that $Q_i(x^-,x^+)\in [-\delta^*,+\delta^*]$,
$\vert x^-\vert^2\vert x^+\vert^2\leq \delta\delta^*$; moreover, $\alpha\vert_{\mathcal{M}_p}= dQ_i$ in these coordinates. The bottom of $\mathcal{M}_p$, that is, its intersection with $\{Q_i=-\delta^*\}$,
is denoted by $\partial^-\mathcal{M}_p$; { similarly,
the top is denoted by} $\partial^+\mathcal{M}_p$.
The rest of the boundary of $\mathcal{M}_p$ is
denoted by $\partial^\ell \mathcal{M}_p$ and $X_i$ is tangent to it. Note that:
\begin{itemize}
\item the group $G$ preserves $\mathcal{M}_p$ for all parameters $(\delta,\delta^*)$;
\item the set of Morse models, as compact subsets of $\mathbb{R}^n$, is contractible.
\end{itemize}
The flow of $X_i$ is denoted by $(X_i^t)_{t\in \mathbb{R}}$.
The {\it local unstable} (resp. {\it local stable}) manifold is formed by the points
$x\in \mathcal{M}_p$ whose negative (resp. positive) flow line
$X_i^t(x)$ goes to $p$ when $t$ goes to $-\infty$ (resp. $+\infty$) without getting out of $\mathcal{M}_p$.
Denote by $\Sigma^-$ the $(i-1)$-sphere which is formed by the points in the bottom of $\mathcal{M}_p$
which belong to $W_{loc}^u(p,X_i)$; that is
\begin{equation}
\Sigma^-=\{(x^-,x^+)\mid |x^-|^2=2\delta^*, |x^+|= 0\}.
\end{equation}
This is called the {\it attaching sphere}.
Similarly, $\Sigma^+$ denotes the {\it co-sphere}, the $(n-i-1)$-sphere which is contained
in the top of $\mathcal{M}_p$ and made of points belonging to $W_{loc}^s(p,X_i)$, that is
\begin{equation}
\Sigma^+=\{(x^-,x^+)\mid |x^-|= 0, |x^+|^2=2\delta^*\}.
\end{equation}
We will use the two projections associated with these coordinates:
\begin{equation}
\pi^+:\partial^+\mathcal{M}_p\to \Sigma^+ \quad\text{and} \quad \pi^-:\partial^-\mathcal{M}_p\to \Sigma^-.
\end{equation}
\subsection{\sc Simple homoclinic orbit, tube and orientation.} \label{ssec:tube}
Let $X$ be an adapted $\alpha$-gradient and let $\mathcal{M}_p$ be a Morse model adapted to $X$ about $p\in Z(\alpha)$.
A homoclinic orbit $\ell$ of $X$ based at $p$
is said to be {\it simple}
when at any point $m\in \ell$ the span $T_mW^u(p,X)+T_mW^s(p,X)$ is of codimension one in $T_mM$. When
$X$ is said to have a {\it unique} homoclinic orbit, it will be meant that this orbit is simple, that is, unique counted with multiplicity.
In this setting, denote by $\underline\ell$ the closure of $\ell\smallsetminus \mathcal{M}_p$; it will be named the
{\it restricted} homoclinic orbit. The end points of $\underline \ell$
are denoted respectively $a^-\in \Sigma^-$ and $a^+\in \Sigma^+$. We also introduce a {\it compact tube}
$T$ around $\underline \ell$ made of $X$-trajectories from $\partial^-\mathcal{M}_p$ to $\partial^+\mathcal{M}_p$. As $\ell$ is simple, if the tube is small enough there are coordinates on $T$ that we note $(x, y, v, z)\in \mathbb{R}^{i-1}\times \mathbb{R}^{n-i-1}\times[-1,1]\times[0,1]$ with the following properties: \begin{itemize} \item $X$ { is positively colinear to } $\partial_z$, \item $\{z=0\}= T\cap \partial^-\mathcal{M}_p$ and $\{z=1\}= T\cap \partial^+\mathcal{M}_p$; \item $T\cap\Sigma^-=\{y=0, v=0, z=0\}$ and $T\cap \Sigma^+=\{x=0, v=0, z=1\}$; \item $\underline\ell=\{x=0,y=0,v=0\}$; \item the frame $(\partial_x, \partial_y, \partial_v)$ is tangent to the leaves of $\alpha$. \end{itemize} In what follows, $\{z=0\}$ (resp. $\{z=1\}$) will stand for $T\cap \partial^-\mathcal{M}_p$ (resp. $T\cap \partial^+\mathcal{M}_p$).\\
Orient the unstable manifold $W^u(p, X)$. Thus, the stable manifold $W^s(p, X)$ is co-oriented. Therefore, we can choose the coordinate $v$ in the tube { so that, for every $z_0\in [0,1]$, the following holds:} \begin{equation}\label{orientation1} \partial_v\wedge \text{or}\left(W^u(p, X)\cap\{z=z_0\}\right)= \text{co-or}(W^s(p,X)). \end{equation} If the orientation of $W^u(p, X)$ is changed, then the co-orientation of $W^s(p,X)$ is also changed and the above equation shows that the positive direction of $v$ remains unchanged.
\begin{remarque}\label{rem:holonomie} It is important to notice that (\ref{orientation1}) tells us nothing about the holonomy along $\underline\ell$ of the foliation defined by $X$ (see the next subsection). Therefore, for a given $\partial_v\in T_{a^+}(\partial^+\mathcal{M}_p)$, the tangent vector $\partial_v\in T_{a^-}(\partial^-\mathcal{M}_p)$ may have any position not contained in the hyperplane $\mathbb{R}\sing{\partial_x,\partial_y}$, depending on $X$.\\
\end{remarque}
\subsection{\sc Holonomy and perturbed holonomy.}\label{perturbed} The foliation of $M\smallsetminus Z(\alpha)$ by the orbits of $X$ together with its two transversals $\partial ^\pm\mathcal{M}_p$ defines a {\it holonomy} diffeomorphism $H_X: \mathcal N^-_X\to \mathcal N^+_X$,
\break that is: \begin{itemize} \item $\mathcal N^-_X$ is an open connected neighborhood of $\{z=0\}$ in $\partial ^-\mathcal{M}_p$; \item $\{z=1\}\subset \mathcal N^+_X\subset \partial^+\mathcal{M}_p$; \item the restriction of $H_X$ to $\{z=0\}$ is defined by $(x,y,v,0)\mapsto (x,y,v,1)$; \item for every $a\in \mathcal N^-_X$, the image $H_X(a)$ belongs to the $X$-orbit of $a$. \end{itemize} By the connectedness of $\mathcal N^-_X$, the time of the flow $X^t$ for going from $a$ to $H_X(a)$ is continuous and hence smooth.
The existence of such holonomy diffeomorphism is an open property with respect to $X$. More precisely, if $X'$ is a close enough approximation of $X$ in the $C^1$-topology, there is a {\it perturbed holonomy} diffeomorphism $H_{X'}$ from an open neighborhood
$ \mathcal N^-_{ X'}$
of $\{z=0\}$ in $\partial^-\mathcal{M}_p$ to an open neighborhood $\mathcal N^+_{X'}$ of $\{z=1\}$ in $\partial^+\mathcal{M}_p$.
\begin{remarque} \label{batons}
It makes sense to speak of $H_{X'}^{-1}(\Sigma^+)\cap \{z=0\} $. It is an $(n-i-1)$-disc $C^1$-close to the $y$-axis in $\{z=0\}$. Similarly, it makes sense to speak of $H_{X'}(\Sigma^-)\cap \{z=1\} $. It is an $(i-1)$-disc close to the $x$-axis in $\{z=1\}$. \end{remarque}
We now state and prove the first item of Theorem \ref {thm1}. Let $p\in Z(\alpha)$. For every $g\in \pi_1(M,p)$, we consider $\mathcal{S}_g\subset \mathcal{F}_\alpha$, the set of adapted $\alpha$-gradients which have a unique homoclinic orbit
forming a loop based at $p$ in the homotopy class $g$; the existence of a broken homoclinic orbit is excluded
from $\mathcal{S}_g$. Recall $u$, the cohomology class of
the closed form $\alpha$; if the evaluation $u(g)$ is non-negative then $\mathcal{S}_g$ is empty.
\begin{prop} \label{item1} For every $g\in \pi_1(M,p)$,
the subset $\mathcal{S}_g\subset \mathcal{F}_\alpha$ is a $C^\infty$ codimension-one submanifold of $\mathcal{F}_\alpha$, that is,
$\mathcal{S}_g$ is locally
defined by a regular real-valued equation.
This stratum has a canonical co-orientation. \end{prop}
\nd {\bf Proof.\ } Let $ X_0$ be any point in $\mathcal{S}_g$. Let $\ell$ denote
the homoclinic orbit which forms a loop whose class is $g\in \pi_1(M,p)$.
We intend to find a regular real valued equation for $\mathcal{S}_g$ near $ X_0$.
From Remark \ref{alexander} we have the two following properties:
\begin{itemize}
\item the local $C^\infty$ stability near $p$ of the adapted $\alpha$-gradients;
\item the acyclicity of the space of Morse models adapted to $X_0$ near $p$.
\end{itemize}
Therefore, the action of the group $\text{Diff} (M)$ on $\mathcal{F}_\alpha$ reduces us to considering a local {\it slice}
$S\subset \mathcal{F}_\alpha$
for this action and to checking the smoothness of $\mathcal{S}_g\cap S$. Namely, choose a Morse model $\mathcal{M}_p$
adapted to $X_0$ and define $S:=\{X\in \mathcal{F}_\alpha \mid X=X_0 \ \text{in}\ \mathcal{M}_p\}$. Thanks to the above-mentioned stability
property,
this $S$ is indeed a local slice for the action of $\text{Diff} (M)$.
We use the tube $T$ and its coordinates as introduced in Subsection \ref{ssec:tube}.
The Implicit Function Theorem allows us to follow continuously, for $X$ close to $X_0$,
a connected component $D(X)$ of $W^u(p,X)\cap \{z=1\}$ which coincides with
$\{y=0,v=0, z=1\}$ when $X= X_0$. Let
\begin{equation}
p_v:\sing{z=1}\to\sing{ v=0, z=1}
\end{equation} denote the projection parallel to $\partial_v$ onto the $(x,y)$-space. The image $p_v(D(X))$
is transverse to $\Sigma^+$.
The intersection is a point $a^+(X)$ which depends smoothly ($C^\infty$) on $X$. Let $b(X)$
be the point of $D(X)$ which has the same coordinates as $a^+(X)$ except the last coordinate $v$.
Thus, the desired equation is
\begin{equation}\label{eq-homoclinic}
v(b(X))=0\,.
\end{equation}
This is clearly a $C^\infty $ equation. To prove that this equation is \emph{regular}, it is sufficient to exhibit
a one-parameter family $(X_s)_{s\in\mathcal{O} p(0)}$ passing through $X_0$ and satisfying the following inequality:
\begin{equation}\label{trans}
\partial_s\left(v(b(X_s)\right)_{\vert s=0}> 0.
\end{equation}
This is easy to perform by taking
$$
X_s=X_0 + s\,\rho(x,y,v,z)\,\partial_v
$$
where $\rho$ is a small non-negative function, supported in the interior of the tube $T$, with a positive integral along
$\underline\ell$\,.
Let us check that such $(X_s)$ fulfils (\ref{trans}). Indeed, at every point of the tube $T$ the vector $X_s$ is in the
span$\{\partial_v, \partial_z\}$. Therefore, if $H_s$ denotes the perturbed holonomy along $\underline \ell$
of the flow of $X_s$ then
$b(X_s)=H_s(a^-)$ and $v(b(X_s))=s\int_0^1\rho\, dz$. Hence, (\ref{trans}) is fulfilled.
Right after (\ref{orientation1}) we noticed that the positive direction of $v$ does not depend on
the chosen orientation of the unstable manifolds. Therefore,
(\ref{trans}) defines a canonical co-orientation of $\mathcal{S}_g$.
What we have done is not sufficient for proving the statement. Equation (\ref{eq-homoclinic}) only solves the
question of existence of a homoclinic orbit at $p$ close to $\ell$. We still have to prove that $\mathcal{S}_g$
does not accumulate on itself\footnote{as does a leaf of the irrational linear foliation on the 2-torus.} near $X_0$.
More precisely, there does not exist a sequence $X_k\in \mathcal{S}_g$ converging to $X_0$ such that
\begin{equation}\label{hyp-v}
v(b(X_k))\neq 0\ \text{ for every }k.
\end{equation}
Assume such a sequence exists. Let $\ell_k$ be the unique homoclinic orbit of $X_k$ based at $p$.
As the sequence $\left(X_k\right)$ is close to $X_0$, the $C^0$-norm of $X_k$ is uniformly bounded
and hence the family $\left(\ell_k\right)$ is equicontinuous.
By Ascoli's Theorem, there is a sub-sequence converging $C^0$ to some line $\ell_\infty$ from $p$
to $p$.
Let us show that $\ell_\infty$ is a (possibly broken) homoclinic orbit of $X_0$.
Indeed, let $(x_k)$ be a sequence of points with $x_k\in \ell_k$ converging to $x_\infty\in \ell_\infty$. Let $X_k^t$
be the flow of $X_k$. For every given $t$, the sequence $\left(X_k^t(x_k)\right)$ converges to $X_0^t(x_\infty)$
since $X_k$ tends to $X_0$ as $k$ goes to $+\infty$. Therefore, the piece of $\ell_\infty$ between $x_\infty$
and $X_0^t(x_\infty)$ is contained in the $X_0$-orbit of $x_\infty.$
If $x_k$ is close to $\ell$, then the restricted homoclinic orbit $\underline{\ell}_k$ lies in the
tube $T$, and hence $v(b(X_k))=0$, contradicting our assumption (\ref{hyp-v}). Therefore, there exists
a small tube $T'$ around $\ell$ such that $\ell_k$ avoids $T'$ for every $k$ and the $C^0$-limit
$\ell_\infty$ as well. Finally, we get two distinct homoclinic orbits of $X_0$ based at $p$,
one of them being possibly broken. This is excluded by the very definition of $\mathcal{S}_g$.
$\Box$\\
\begin{defn}\label{df:PositTrans}
Let $(X_s)_{s\in \mathcal{O} p(0)}$ be a one-parameter family
of adapted $\alpha$-gradients with $X_0\in \mathcal{S}_g$\,.
This family is said to be
\emph{positively transverse} to the stratum $\mathcal{S}_g$ if it satisfies \rm{(\ref{trans})}.
\end{defn}
Let $(X_s)_{s\in \mathcal{O} p(0)}$ be such a one-parameter family
and let $H_s$ be the perturbed holonomy along $\underline \ell$ of the flow of $X_s$.
Below, we use the coordinates $(x,y,v)$ both in $\{z=0\}$ and in $\{z=1\}$.
For further use,
we are interested in the local solution $x_s$ of the equation \begin{equation}\label{x_s} (x\circ H_s)(x,0,0)=0 \end{equation} which is equal to $0$ when $s=0$. In this equation, the unknown is the unique point of the $x$-space in $\{z=0\}$ whose image through $H_s$ is in the $y$-space of $\{z=1\}$.
And similarly, we consider the solution $y_s$ of the equation \begin{equation}\label{s_s} (y\circ H_s^{-1})(0,y,0)=0 \end{equation} which is equal to $0$ when $s=0$.
\begin{lemme}\label{velocity} With the above data and notations, the following equality holds: \begin{equation} \partial_s \left( v\circ H_s\right)(x_s,0,0)_{\vert s=0}+ \partial_s\left(v\circ H_s^{-1}\right)(0,y_s,0)_{\vert s=0}=0\,. \end{equation}
\end{lemme}
\nd {\bf Proof.\ } A Taylor expansion gives $$ (v\circ H_s)(x_s,0,0)= (v\circ H_s)(0,0,0)+O(s^2)\,. $$ That follows from the fact that the velocity of $x_s$ is a vector which is contained in the kernel of $dv$. Similarly, we have: $$ (v\circ H_s^{-1})(0,y_s,0)= (v\circ H_s^{-1})(0,0,0)+ O(s^2)\,. $$ Observe that $H_0(x,y,v)= (x,y,v)$. Thus, differentiating the composed map $H_s^{-1}\circ H_s= Id$ with respect to $s$ at $s=0$ yields: $$\partial_s H_s^{-1}(0,0,0)_{\vert s=0}+\partial_s H_s(0,0,0)_{\vert s=0}=0\,. $$ Altogether, we get the desired formula.
$\Box$\\
\subsection{\sc Equators, signed hemispheres and
latitudes. \label{ssec:Equators}}${}$
We introduce some useful notations. Let $\mathbb{D}^k$, { $ k\geq 1$,}
be the closed Euclidean disc of dimension $k$ and radius 1
equipped with spherical coordinates $(r, \theta)\in [0,1]\times \mathbb{S}^{k-1}$. A point $\theta\in \mathbb{S}^{k-1}$
will also be viewed as a unit vector $\theta\in T_0\mathbb{D}^k$.
Suppose that we are given a preferred co-oriented hyperplane $\Delta\subset T_0\mathbb{D}^k$. It determines a \emph{preferred co-oriented equator} $E^\Delta\subset \mathbb{S}^{k-1}$. The oriented normal to $\Delta$ determines two poles on the sphere: the \emph{North pole} $\nu_\Delta$ on the positive side of $\mathbb{D}^k$ and the \emph{South pole} $\sigma_\Delta$ on the negative side; and two open hemispheres of $\mathbb{S}^{k-1}$ respectively noted $\mathcal{H}^+(\mathbb{S}^{k-1})$ and $\mathcal{H}^-(\mathbb{S}^{k-1})$.
Any point $\theta\in\mathbb{S}^{k-1}$ determines an angle with respect to the North pole $\nu_\Delta$. The cosine of this angle defines a \emph{latitude} $\cos_\Delta:\mathbb{S}^{k-1}\to [-1,1]$ given by the scalar product \begin{equation}\label{eq:Trigo} \cos_\Delta(\theta):=\langle \nu_\Delta,\theta\rangle. \end{equation}
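Note the immediate consequences of these definitions: $\cos_\Delta(\nu_\Delta)=1$, $\cos_\Delta(\sigma_\Delta)=-1$, the preferred equator $E^\Delta$ is exactly the zero locus of $\cos_\Delta$, and the open hemispheres $\mathcal{H}^+(\mathbb{S}^{k-1})$ and $\mathcal{H}^-(\mathbb{S}^{k-1})$ are the loci where $\cos_\Delta$ is respectively positive and negative.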
\begin{prop}\label{prop:Latitudes}
Every $ X$ in $\mathcal{S}_g$ defines
a preferred latitude on
both the attaching sphere $\Sigma^-$ and the co-sphere $\Sigma^+$.
\end{prop}
To this end, we use
multispherical coordinates $(\phi,r,\psi)\in \mathbb{S}^{i-1}\times [0,1]\times \mathbb{S}^{n-i-1}$ on
each level set of $\mathcal{M}_p$ (not well defined on the local stable/unstable manifolds).
We recall the map \begin{equation}\label{eq:Desc}
Desc:\partial^+\mathcal{M}_p\smallsetminus \Sigma^+\to \partial^-\mathcal{M}_p\smallsetminus \Sigma^- \end{equation} obtained by descending the flow lines in $\mathcal{M}_p$. This map reads $Id$ in these coordinates.
The preferred latitude that we are going to define on $\Sigma^-$ and $\Sigma^+$ will be called respectively the \emph{$\phi$-latitude}
and the \emph{$\psi$-latitude}. We emphasize that these functions depend on $X\in \mathcal{S}_g$. We denote them by \begin{equation}\label{eq:Latitudes}
\cos_\phi^X:\Sigma^-\to [-1,1]\quad\text{and}\quad \cos_\psi^X:\Sigma^+\to [-1,1]. \end{equation}
When the vector field is clear from the context, these functions will just be denoted $ \cos_\phi$ and $ \cos_\psi$.
We shall decorate all the data related to $ \cos_\phi$ or $ \cos_\psi$ by using
the letter $\phi$ or $\psi$ respectively;
namely, the preferred hyperplane $\Delta^\phi$, the preferred equator $ E^\phi\subset \Sigma^-$,
the poles $ \nu_\phi$ and $\sigma_\phi $ in $\Sigma^-$, and so on.\\
\nd {\bf Proof.\ } Take any $X$ in $\mathcal{S}_g$ and denote by $\ell$ its unique homoclinic orbit from $p$ to itself. The end point $a^+$ of $\underline\ell$ has coordinates $a^+=( -,0,\psi_0)$; as usual with polar coordinates, when the radius is 0 the spherical coordinate is not defined. Let \begin{equation}\label{eq:PsiProj} \pi^{\psi_0}: \partial^+\mathcal{M}_p \to\{\psi=\psi_0\},\quad \pi^{\psi_0}(\phi, r,\psi)= (\phi, r,\psi_0), \end{equation}
be the projection onto the meridian $i$-disc.
{ Let $H: \{z=0\}\to \{z=1\}$ denote the holonomy diffeomorphism defined by the vector field $X$
in the tube $T$. The image of $T\cap\Sigma^- $ through $H$
is an $(i-1)$-disc $D\subset \partial^+\mathcal{M}_p$.} Due to the transversality condition associated with $\ell$,
this disc is a graph over
its projection $D_{\psi_0}:=\pi^{\psi_0}(D)$ { if the tube is small enough around $\underline\ell$.} Then,
\begin{equation}\label{eq:HypPhi} \Delta^\phi:=T_{a^+}D_{\psi_0}\subset T_{a^+}\{\psi=\psi_0\}\text{ is the preferred hyperplane}. \end{equation} As we noticed in Remark \ref{rem:holonomie}, the vector $\partial_v\in T_{a^+}\partial^+\mathcal{M}_p$ is neither tangent to $\Sigma^+$ nor to $D$, which implies that
\begin{equation} \label{positive_side}
\parent{d\pi^{\psi_0}}_{a^+}(\partial_v) \text{ defines { a co-orientation of }} \Delta^\phi { \text{ in }
T_{a^+}\{\psi=\psi_0\} .}
\end{equation} This provides a preferred latitude on the $\phi$-sphere $\partial\{\psi=\psi_0\}\cong \mathbb{S}^{i-1}\times\{1\}\times\{\psi_0\}$. By the canonical isomorphism
\begin{equation}\label{eq:IdentPhi} \mathbb{S}^{i-1}\times \{1\}\times\{\psi_0\} \cong \mathbb{S}^{i-1}\times \{0\}\times\{-\} \cong \Sigma^-,
\end{equation} the preferred latitude on $\partial\sing{\psi=\psi_0}$ descends to some $\phi$-latitude which defines the announced $\cos_\phi^{X}:\Sigma^-\to [-1,1]$ in \eqref{eq:Latitudes}. The $\phi$-equator is the locus $\{\cos_\phi^X=0\}$, the North pole is $\nu_\phi=(\cos_\phi^X)^{-1}(1)$, etc.
For the $\psi$-latitude on the co-sphere $\Sigma^+$, we do the same construction by using
the reversed flow and its holonomy $H^{-1}$. More precisely,
take the image of $T\cap \Sigma^+$ through $H^{-1}$; it is a
$(n-i-1)$-disc $D'\subset \partial^-\mathcal{M}_p$ centered in $a^-$ whose spherical coordinates are $ a^- = (\phi_0, 0, -) $. Let
\begin{equation}\label{eq:PhiProj}
\pi^{\phi_0}: \partial^-\mathcal{M}_p \to\{\phi=\phi_0\},\quad \pi^{\phi_0}(\phi, r,\psi)= (\phi_0, r,\psi),
\end{equation}
be the projection onto the meridian $(n-i)$-disc
and let $D_{\phi_0}$ be the image $\pi^{\phi_0}(D')$. Hence,
\begin{equation}\label{eq:HypPsi} \Delta^\psi:=T_{a^-}D_{\phi_0}\subset T_{a^-}\{\phi=\phi_0\} \text{ is the preferred hyperplane.} \end{equation}
Moreover, $\partial_v\in T_{a^-}\partial^-\mathcal{M}_p$ is neither tangent to $\Sigma^-$ nor to $D'$, which implies that \begin{equation}\label{eq:PositSi+} \parent{d\pi^{\phi_0}}_{a^-}(\partial_v)\text{ defines a co-orientation { of} } \Delta^{\psi} { \text{ in } T_{a^-}\{\phi=\phi_0\}.} \end{equation} This yields a preferred latitude on the sphere $\partial\sing{\phi=\phi_0}$, that can be carried to $\Sigma^+$ by means of the canonical isomorphism
\begin{equation}\label{eq:IdentPsi} \sing{\phi_0}\times\{1\}\times \mathbb{S}^{n-i-1}\cong \sing{-}\times\sing{0}\times \mathbb{S}^{n-i-1} \cong \Sigma^+.
\end{equation} This defines the announced $\psi$-latitude $ \cos_\psi^X$ in \eqref{eq:Latitudes}.
$\Box$\\
\subsection{\sc Holonomic factor and character function.}
By construction of the $\phi$-latitude and the $\psi$-latitude, we have the following splittings: \begin{equation}\label{eq:decompTgt} \begin{cases} T_{a^-}\{z=0\} = T_{a^-}\Sigma^-\oplus({\mathbb{R}\nu_{\psi}}\oplus \Delta^\psi),\\ T_{a^+}\{z=1\} = (\Delta^\phi\oplus\mathbb{R}{\nu_\phi})\oplus T_{a^+}\Sigma^+\, . \end{cases} \end{equation}
Given $X\in\mathcal{S}_g$, {recall the homoclinic orbit $\ell$ whose homotopy class is $g\in \pi_1(M, p)$ and all associated objects that we introduced in Subsection \ref{ssec:tube}: the tube $T$, its coordinates $(x,y,v,z)$ and the holonomy diffeomorphism $H: \{z= 0\} \to \{z= 1\}$. It reads $Id$ in the $(x,y,v)$-coordinates and $H(a^-)=a^+$. We are free to choose the coordinates of the tube such that the unit tangent vector $\partial_v^1:=\partial_v\in T_{a^+}\{z=1\}$ verifies \begin{equation}\label{eq:ChoiceVAxis} \partial_v^1={\nu_{\phi}}. \end{equation}
The \emph{linearized holonomy} $T_{a^+}H^{-1}$
maps $\partial_v^1$ to $\partial_v^0:=\partial_v\in T_{a^-}\{z=0\}$. By \eqref{eq:decompTgt}, the latter vector
decomposes as \begin{equation}\label{eq:HolonDec} \partial_v^0=v_x+\bar{\eta}\,{\nu_\psi}+v_y,\text{ where }v_x\in T_{a^-}\Sigma^-, \, v_y\in \Delta^\psi,\, \bar{\eta}\in\mathbb{R}. \end{equation} As we pointed out in Remark \ref{rem:holonomie}, the only restriction on the holonomy of $X$ along $\underline{\ell}$ is that $\bar{\eta}\neq 0$. Moreover, { according to \eqref{eq:PositSi+}, the vector $\partial_v^0$ defines the positive side of the preferred hyperplane $\Delta^\psi$. As a consequence, }
$\bar{\eta}$ must be positive.
\begin{defn}\label{def:HolFactor} The holonomic factor associated with $X$ is the positive real number given by \begin{equation} \eta( X):=\,\frac{1}{\bar{\eta}}\,>\,0\, . \end{equation} \end{defn} The following subsets of $\mathcal{S}_g$
are respectively called the \emph{$\phi$-axis} and the \emph{$\psi$-axis} of $\mathcal{S}_g$:
\begin{equation}\label{axis} \mathcal{S}_g^{\phi}:=\ens{X\in\mathcal{S}_g}{a^+(X)\in E^\psi}\,\text{ and }\quad\mathcal{S}_g^{\psi}:=\ens{X\in\mathcal{S}_g}{a^-(X)\in E^\phi}, \end{equation} that we also call the \emph{spherical axes}. Here, $a^+(X)$ and $a^-(X)$ stand for the respective extremities of the restricted orbit $\underline\ell$, where $\ell$ is the unique homoclinic orbit of $X$ in the homotopy class $g$. Denote the intersection of the axes by: \begin{equation} \mathcal{S}_g^{0,0}:=\mathcal{S}_g^\phi\cap\mathcal{S}_g^\psi \end{equation} which is empty when one axis is so.
\begin{remarque} \label{empty-axis}When the Morse index of $\mathcal{S}_g$ is equal to 1, then the $\phi$-equator is empty but there are still signed poles. In that case, the $\phi$-latitude takes only the values $\{-1,+1\}$ and the $\psi$-axis is empty. When the Morse index of $\mathcal{S}_g$ equals $n-1$, then the $\psi$-equator and the $\phi$-axis are empty and the $\psi$-latitude is valued in $\{-1,+1\}$. If $n>2$, these two events do not happen simultaneously. This is the reason for the dimension assumption in Theorem \ref{thm1} (2).
\end{remarque}
We are now ready to define the important notion of \emph{character function}, together with its ingredients. In order to simplify notations, we introduce the {\it extended $\phi$- and $\psi$-latitudes}
to $\mathcal{S}_g$ by setting for every $X\in\mathcal{S}_g$: \begin{equation} \label{ext-latitude} \omega_\phi(X):= \cos_\phi^X(a^-(X)) \quad \text{and}\quad \omega_\psi(X):= \cos_\psi^X(a^+(X)). \end{equation}
\begin{defn}\label{def:Character} The \emph{character function} $\chi:\mathcal{S}_g\to\mathbb{R} $ is defined by: \begin{equation}\label{eq:CharVal} \chi(X):=\eta( X)\, \omega_\psi(X)+ \omega_\phi(X)
\end{equation} Define $\mathcal{S}_g^0$ (resp. $\mathcal{S}_g^+$, $\mathcal{S}_g^-$) as the locus where $\chi$ vanishes (resp. is positive, is negative).
\end{defn}
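For orientation, here is a purely illustrative choice of values, not tied to any specific example: if $\eta(X)=2$, $\omega_\psi(X)=-\frac 12$ and $\omega_\phi(X)=1$, then $\chi(X)=2\cdot(-\frac 12)+1=0$; such an $X$ lies in $\mathcal{S}_g^0$ although neither extended latitude vanishes, that is, $X$ lies on neither spherical axis.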
By the very definition of the latitudes, it is clear that each axis intersects $\mathcal{S}_g^0$ along $\mathcal{S}_g^{0,0}$, as Figure \ref{fig:Sg0etAxes} suggests (compare Figure \ref{fig:Sg0dsSg}). \begin{figure}
\caption{The substratum $\mathcal{S}_g^{0,0}\subset \mathcal{S}_g^0$ as the intersection of the $\phi$-axis with the $\psi$-axis.}
\label{fig:Sg0etAxes}
\end{figure}
Below, we start giving some information about $\mathcal{S}_g^{0,0}$ and $\mathcal{S}_g^0$ from which Theorem \ref{thm1} will be completely proved.
\begin{prop}\label{item2} ${}$\\ \noindent {\rm 1)} The axes $\mathcal{S}_g^\phi$ and $\mathcal{S}_g^\psi$ are $C^\infty$ submanifolds of codimension 1 in $\mathcal{S}_g$. Moreover, when they are both non-empty their intersection $\mathcal{S}_g^{0,0}$ is non-empty and transverse. Hence, $\mathcal{S}_g^{0,0}$ is a $C^\infty$ submanifold of codimension 2 in $\mathcal{S}_g$.
\noindent {\rm 2)} If $n>2$
the zero set $\mathcal{S}_g^0= \chi^{-1}(0)$ of the character function is a \emph{non-empty} co-oriented $C^\infty$ submanifold of codimension 1
in \emph{each} connected component of $\mathcal{S}_g$.
\end{prop}
\nd {\bf Proof.\ }
Let $i=i(p)$ denote the Morse index of the zero $p\in Z(\alpha)$ where $g$ is based.\\
1) The equation of the $\psi$-axis $\mathcal{S}_g^\psi$ in $\mathcal{S}_g$ reads
with the notations introduced in (\ref{ext-latitude}) and (\ref{axis}): $$
\omega_\phi(X)= 0.
$$
If the index $i$ is equal to 1, by Remark \ref{empty-axis}, the $\psi$-axis is empty and there is nothing to prove. If not,
let $X_0\in \mathcal{S}_g^\psi$. We have to exhibit a germ
$(X_s)_{s\in \mathcal{O} p(0)}$
of path in $\mathcal{S}_g$ passing through $X_0$ such that the $s$-derivative of
$ \omega_\phi(X_s)$ at $s=0$ is non-zero. Let $H_s$ be the local
holonomy diffeomorphism
of $X_s$ from a neighborhood of $\sing{z=0}$ in $\partial^-\mathcal{M}_p$ to a neighborhood of $\sing{z=1}$ in $\partial^+\mathcal{M}_p$.
Let $a^-=(\phi_0, 0,-)$ and $a^+=(-,0, \psi_0)$ be the end points of the restricted homoclinic
orbit $\underline\ell$ of $X_0$.
We arrange that $H_s$ keeps the $\pi^{\psi_0}$-projection of $H_s(\Sigma^-)$
into the { meridian $\{\psi=\psi_0\}\subset \partial^+\mathcal{M}_p$ }
independent of $s$. Thus, the equator $E^\phi$ is also independent of $s$ and the $\phi$-latitude $\cos_\phi$
does not depend on $s$.
Therefore, we are reduced to
control the $s$-derivative of $ \cos_\phi(a^-(X_s))$.
We recall that every germ of isotopy of the holonomy $H_0$ lifts to a deformation of $X_0$.
Then,
we are free to choose the holonomy so that $s\mapsto a^-(X_s)\in\Sigma^-$ crosses the non-empty equator $E^\phi$
transversely at time $s=0$.
Thus, we are done.
For a similar reason, the equation $ \omega_\psi(X)= 0$ of the { $\phi$-axis} $\mathcal{S}_g^\phi$ is regular.
Let us show the property of $\mathcal{S}_g^{0,0}$ when both axes are non-empty. In that case, the Morse index verifies $1<i<n-1$, and it is possible to have $\cos_\phi(a^-)=\cos_\psi(a^+)=0$, that is $\mathcal{S}_g^{0,0}\neq\emptyset$. Take any $X_{0,0}\in \mathcal{S}_g^{0,0}$. We choose a 2-parameter family $X_{s,u}\in \mathcal{S}_g$ whose holonomy $H_{s,u}$
satisfies the following conditions:
\begin{enumerate}
\item the equator $E^\phi$ is independent of $s$ when $u=0$ and $\partial_s \cos_\phi(a^-(X_{s,0}))>0$;
\item the equator $E^\psi$ is independent of $u$ when $s=0$ and $\partial_u \cos_\psi(a^+(X_{0,u}))>0$;
\item for every $(s,u)$ close to $(0,0)$, we have $a^-(X_{s,u})\in \Sigma^-$ and $a^+(X_{s,u})\in \Sigma^+$.
\end{enumerate}
Condition (3) guarantees that $X_{s,u}$ runs in $\mathcal{S}_g$. Thanks to (1) and (2), the evaluation map
$(s,u)\mapsto (a^-(X_{s,u}), a^+(X_{s,u}))\in \Sigma^-\times\Sigma^+$ is transverse to the submanifold
$E^\phi\times E^\psi$. This proves that the system of equations defining $\mathcal{S}_g^{0,0}$ near $X_{0,0}$,
namely $ \left\{
\omega_\phi(X)=0, \ \omega_\psi(X)=0
\right\}$, is of rank 2.\\
\noindent 2) First, let us prove that the equation $\chi(X)=0$ has a solution in each connected component of $\mathcal{S}_g$.
Let $X\in \mathcal{S}_g$ and let $a^+$ and $a^-$ be the corresponding end points of its restricted homoclinic
orbit $\underline\ell$. Any move of these points in their respective spheres lifts to a deformation of $X$ in the space of adapted $\alpha$-gradients.
If $\Sigma^+$ and $\Sigma^-$ are both connected, there is such a move after which $a^-$ and $a^+$ lie in the
equators of their respective spheres. Then, $X$ is deformed in $\mathcal{S}_g$ until it lies in $\mathcal{S}_g^{0,0}\subset \mathcal{S}_g^0$; this
answers the question in this case.
As $n>2$, one of the spheres $\Sigma^-$ and $\Sigma^+$ is not 0-dimensional, say $\Sigma^-\neq \mathbb{S}^0$.
Then, one can move $X$ in $\mathcal{S}_g$ and modify the holonomic factor
by some homothety to make it less than 1; secondly, knowing that $ \omega_\psi(X)=\pm 1$,
one moves $a^-$ in $\Sigma^-$ and changes $X$ accordingly, keeping the holonomic factor constant,
until the locus $\chi(X)=0$ is reached. So, $\mathcal{S}_g^0$ is visible in each connected component of $\mathcal{S}_g$.
It remains to prove that the equation $\chi(X)=0$ is regular everywhere. For every
$X_0\in \mathcal{S}_g^0$ we have to exhibit
a germ of path $(X_s)_s$ in $\mathcal{S}_g$ passing through $X_0$ such that
$\partial_s\chi(X_s)>0$.
Let $\underline \ell$ be the restricted homoclinic orbit of $X_0$; let $a^+\in \Sigma^+$ and $a^-\in \Sigma ^-$ be its end points.
First, we arrange that the equators $E^\phi $ and $E^\psi$ do not depend on $s$
by requiring that the holonomy $H_s$ along the homoclinic orbit $\ell$ of $X_0$
fulfils the following conditions:
\begin{itemize}
\item for every $s$, there is a homoclinic orbit $\ell_s$ (the end points of the restricted orbit $
\underline\ell_s$ are denoted by $a^-(X_s)$ and $a^+(X_s)$);
\item the $\pi^{\psi_0}$-projection of $H_s(\Sigma^-)$
into the meridian $\{\psi=\psi_0\}$ is constant;
\item the $\pi^{\phi_0}$-projection of
$(H_s)^{-1}(\Sigma^+)$ into the meridian $\{\phi=\phi_0\}$
is constant.
\end{itemize}
Now, there are two cases depending on whether $ \cos_\psi(a^+(X_0))$ is equal to 0 or not.
If $ \cos_\psi(a^+(X_0))$ is not 0,
the germ of $H_s$ at $a^-(X_0)$ is chosen to be a contraction: its
center is $a^+(X_0)$
and its factor is $e^s$ (in the coordinate $(x,y,v)$ of the extremity $\{z=1\}$ of the tube $T$
around $\underline\ell$).
Notice that such a contraction preserves the above requirements for the constancy of the equators.
Then, a calculation shows that the holonomic factor
is multiplied by the same factor, which implies that $\partial_s\chi(X_s)>0$ since $a^\pm(X_s)$ is constant.
Finally, we have to solve the case when $ \cos_\psi(a^+(X_0))=0$. Here, we arrange the holonomy
$H_s$ so that $\partial_s \cos_\psi(a^+(X_s))>0$ and $\partial_s \cos_\phi(a^-(X_s))=0$, which again
implies $\partial_s\chi(X_s)>0$ since $\eta( X_s)>0$.
This finishes the proof of Proposition \ref{item2}.
$\Box$\\
After Proposition \ref{item1} and Proposition \ref{item2} we have a complete proof
of Theorem \ref{thm1}.
$\Box$\\
\subsection{\sc Normalization of crossing path.}
The normalization in question will be used for proving Theorem \ref{thm:selfslideSimplif}
and Theorem \ref{thm:Doubling}. The normalization is achieved by making some group act
on $M$.
At the end of the subsection it will be proved that the stratification $(\mathcal{S}_g, \mathcal{S}_g^0, \mathcal{S}_g^{0,0})$
is invariant under this action.
In this subsection we use notations such as $D_1(0), C_1(0)$ which will be used repeatedly
in Section \ref{section3} (see Notation \ref{not:C1(s)}).
Consider the image $H_0(\Sigma^-\cap\{z=0\})\subset \partial^+\mathcal{M}_p$
by the holonomy map of $X_0$
along its homoclinic orbit $\ell$ in the homotopy
class $g\in \pi_1(M,p)$. Let us define $$D_1(0):= H_0(\Sigma^-)\cap\{z=1\}\ \text{ and }\ C_1(0):= Desc(D_1(0))$$ where $Desc$ is the descent map defined in (\ref{eq:Desc}). We recall $a^+=\ell\cap \Sigma^+$ (resp. $a^-= \ell\cap \Sigma^-$) whose spherical coordinate is denoted by $\psi_0$ (resp. $\phi_0$).
\begin{defn} \label{normalization} ${}$ A crossing path $(X_s)_{s\in \mathcal{O} p(0)}$ of $\mathcal{S}_g$ is said to be normalized if it fulfills the following requirements for every $s$, where $H_s$ denotes the perturbed holonomy of $X_s$. \begin{enumerate} \item $D_1(0)$ has to be contained in the preferred hyperplane $\Delta^\phi$ of the meridian $i$-disc
\break $\{\psi=\psi_0\}$.
\item Let $R_{a^-}$ denote the half ray $\{\phi= \frac \pi2 sign(\cos_\phi(a^-)),\, r\in [0,1], \psi=\psi_0\}$.
The curve $J_0:=H_0^{-1}(R_{a^-})$ has to be contained in the meridian disc $\{\phi=\phi_0\}$ of $\partial^-\mathcal{M}_p$. \item The disc $D'(s):= H_s^{-1}(\Sigma^+)\cap\{z=0\}$ has to move in the same meridian disc $\{\phi=\phi_0\}$.
\end{enumerate}
\end{defn}
Only the first item will be used to prove Theorem \ref{thm:selfslideSimplif}; the two other items enter the proof of Theorem \ref{thm:Doubling}. The main tool for {\it normalization by conjugation} (see Proposition \ref{conjugation}) is given by the next lemma about diffeomorphisms of $\mathcal{M}_p$. Its proof by Taylor expansion is detailed in the Appendix to \cite{flX}.
\begin{lemme} \label{C^1}
Let $K$ be a $C^1$-diffeomorphism of $\partial^+\mathcal{M}_p$ of the form $(\phi, r,\psi)\mapsto
(\phi, r, k(\phi,r,\psi))$ with
$k(\phi,0,\psi)=\psi$.
Then, $K$ uniquely extends to $\mathcal{M}_p$ as a $C^1$-diffeomorphism
which is the identity on both stable and unstable local manifolds
and which keeps
the standard vector field $X_i$ invariant. Moreover, the extension ${\overline K}$
is $C^1$-tangent to $Id$ along the attaching sphere $\Sigma^-$.
\end{lemme}
It is worth noting that the extension cannot be $C^2$ in general, even if $K$ is $C^\infty$.
This lemma can be also used by interchanging the roles of $\partial ^+\mathcal{M}_p$ and $\partial ^-\mathcal{M}_p$
and simultaneously the roles of $\phi$ and $\psi$.
\begin{prop}\label{conjugation} Given a positive crossing path $(X_s)_s$ of the stratum $\mathcal{S}_g$,
there exists a $C^1$-diffeomorphism $\overline K$ of $M$, isotopic to $Id_M$ among the
$C^1$-diffeomorphisms keeping $\alpha$ and $\mathcal{M}_p$ invariant,
such that the crossing path $(\overline K_* X_s)_s$ carried by $\overline K$ is normalized.
Moreover, $\overline K$ may be chosen so that it preserves $\ell$ pointwise.
\end{prop}
Notice that the vector field $(\overline K_* X_s)$ might only be $C^0$. But it is integrable
and the associated foliation is transversely $C^1$;
its holonomy is changed by $C^1$-conjugation.\\
\nd {\bf Proof.\ }
\noindent 1) We first look for a diffeomorphism $\overline K$ of $M$ carrying $(X_s)_s$ to a crossing path which fulfils
the first item of Definition \ref{normalization}.
If the tube $T$ around $\ell$ is small enough $D_1(0)$ is nowhere tangent to the fibres
of the projection $\pi^{\psi_0}$
to the meridian disc $\{\psi=\psi_0\}$. As a consequence,
its projected disc $D_{\psi_0}$ is smooth and there exists a smooth map $\bar k:D_{\psi_0}\to \Sigma^+$
such that $D_1(0)$ reads
$$D_1(0)= \{(\phi,r,\bar k(\phi,r))\mid (\phi,r)\in D_{\psi_0}\}.$$
Since its source is contractible, $\bar k$ is homotopic to the constant map valued in $\psi_0$.
By isotopy extension preserving the fibres of $\pi^{\psi_0}$, there exists some
diffeomorphism $K_1$ of $\mathcal{M}_p$ of the form assumed in
Lemma \ref{C^1} which maps the given $D_1(0)$ to $D_{\psi_0}$. Therefore,
this $K_1$ extends to $\mathcal{M}_p$.
Since $K_1$ is isotopic to $Id$ through diffeomorphisms of the same type, its extension
$\overline K_1$ to $\mathcal{M}_p$ also extends to $M$ with the same name. Moreover,
the isotopy of $\overline K_1$ to $Id_M$ is supported in a neighborhood of $\mathcal{M}_p$
and preserves each level set of a local primitive of $\alpha$. Since $\Sigma^\pm$ are kept fixed by
$\overline K_1$, it is easy to ensure that $\ell$ is fixed by $\overline K_1$.
After having carried $X_0$ by this $\overline K_1$, we are reduced to the case where
$D_1(0)$ is contained in the meridian disc $\{\psi=\psi_0\}$. Decreasing the radius
of the tube $T$ if necessary, the tangent plane
$T_mD_1(0)$ is almost orthogonal to the pole axis directed by $\nu_\phi$
at each $m\in D_1(0)$. This implies that, for every $r\in (0,1)$, the disc $D_1(0)$ is transverse
to the $(i-1)$-sphere of radius $r$ in $\{\psi=\psi_0\}$.
The image $C_1(0)$ of $D_1(0)$ by $Desc$
is diffeomorphic to $S^{i-2}\times(0,1] $
and contained in the spherical annulus $\mathbb{A}_{\psi_0}:=\{(\phi, r, \psi_0)\mid \phi\in \Sigma^-, r\in (0,1]\}$. By tangency of $D_1(0)$ with the preferred hyperplane $\Delta^\phi$,
the
{\it end} of $C_1(0)$ when $r\to 0$ compactifies as the
$\phi$-equator $E^\phi\subset \Sigma^-$.
Moreover, $C_1(0)$ is transverse (inside $\mathbb{A}_{\psi_0}$) to the sphere $\Sigma^-\times\{(r,\psi_0)\}$ for every
$r\in(0,1)$, since the corresponding assertion holds in $\partial^+\mathcal{M}_p$.
Thus, there is an annulus $C_{\rm eq}\subset E^\phi\times[0,1]\times\{\psi_0\} $
such that $C_1(0)$ reads as the graph of some map $\bar\kappa: C_{\rm eq}\to \Sigma^-$ valued in the
complement of the poles. Then, $\bar\kappa$ is homotopic to the map $(\phi,r)\mapsto \phi$
from $C_{\rm eq}$ to the equator $E^\phi$ of $\Sigma^-$.
By isotopy extension preserving each sphere $\Sigma^-\times\{(r,\psi)\}$,
we have some diffeomorphism $K_2$ of $\partial^-\mathcal{M}_p$ of the form
$(\phi,r, \psi)\mapsto (\kappa(\phi,r,\psi),r,\psi)$ which
pushes $C_1(0)$ to its {\it flat} position $C_{\rm eq}$ and satisfies $\kappa(\phi,0,\psi)=\phi$.
By applying Lemma \ref{C^1} ``upside down'', $K_2$ extends to $M$ preserving $\mathcal{M}_p$ with its
standard gradient. On the upper boundary of the Morse model,
this means that $\overline K_2$ pushes $D_1(0)$ to an $(i-1)$-disc in the hyperplane $\Delta^\phi$
by some diffeomorphism tangent to $Id$ at $a^+$. As for $\overline K_1$, this $\overline K_2$
may be chosen so that $\ell$ is fixed pointwise. The composed diffeomorphism
$\overline K_2\circ\overline K_1$ is as desired.\\
\noindent 2) We now prove the last two items. This is just an easy addition to what we have done above.
The diffeomorphism $\overline K_1$ keeps $\ell$ fixed pointwise. Let $\pi^{\phi_0}$ be the projection
of $\partial^-\mathcal{M}_p\cong \left(\Sigma^-\times \{\phi=\phi_0\}\right)$ onto its second factor. The tangent spaces at $a^-$
to $\Sigma^-$, $D'(0)$ and $J_0$ are independent and their span is equal to $T_{a^-}\partial^-\mathcal{M}_p$. Then, after shrinking the
tube $T$ if necessary, $\pi^{\phi_0}$ embeds the union $\left(\cup_sD'(s)\right)\cup J_0$ into $ \{\phi=\phi_0\}$;
notice that due to the non-vanishing of the velocity with respect to $s$ the union $\left(\cup_sD'(s)\right)$
is an $(n-i)$-disc transverse to the fibres of $\pi^{\phi_0}$.
As $a^-$ is far from the equator $E^\phi$, it is easy to find a common diffeomorphism $K_2$ of $\partial^-\mathcal{M}_p$
preserving the coordinates $(r,\psi)$ such that it maps $C_1(0)$ to the equatorial annulus $C_{\rm eq}$---the job
required for Item (1)---and
simultaneously maps $\left(\cup_sD'(s)\right)\cup J_0$ diffeomorphically onto its $\pi^{\phi_0}$-image in the meridian $\{\phi=\phi_0\}$.
This completes the proof.
$\Box$\\
\begin{remarque} Proposition \ref{conjugation} holds true for a finite dimensional family. For instance, if we are given a two-dimensional germ $\left(X_{s,t}\right)_{s,t}$ adapted to the pair $(\mathcal{S}_g,\mathcal{S}_g^0)$
in $X_{0,0}$---in the sense of Definition \ref{decompositionS^0}---then each crossing path $\gamma_t:=(s\mapsto X_{s,t})$ of $\mathcal{S}_g$ has a normalization
by some $C^1$-diffeomorphism $\overline K_t$ depending continuously on $t$ in the $C^1$-topology.
\end{remarque}
\begin{notation}\label{cG} Let $\mathcal{G}^\pm$ be the groups of diffeomorphisms of $M$ isotopic to $Id_M$ which fix the homoclinic orbit $\ell$ pointwise, preserve the closed one-form $\alpha$ and its standard gradient in $\mathcal{M}_p$, and have the following form: \begin{itemize} \item the restriction of every element in $ \mathcal{G}^+$ to $\partial^+\mathcal{M}_p$ reads $(\phi,r,\psi)\mapsto \bparent{\phi, r, k^+(\phi,r, \psi)}$ with $k^+(\phi, 0,\psi)=\psi$; \item the restriction of every element in $ \mathcal{G}^-$ to $\partial^-\mathcal{M}_p$ reads $(\phi,r,\psi)\mapsto \bparent{k^-(\phi, r,\psi),r, \psi}$ with $k^-(\phi, 0,\psi)=\phi$. \end{itemize} \end{notation}
\begin{prop} \label{invariance} The action of the groups $\mathcal{G}^+$ and $\mathcal{G}^-$ on the space of adapted $\alpha$-gradients preserves the strata $\mathcal{S}_g$, $\mathcal{S}_g^0$ and $\mathcal{S}_g^{0,0}$.
\end{prop}
\nd {\bf Proof.\ } We do it for $\mathcal{G}^+$. Let $G\in \mathcal{G}^+$ and $X_0\in \mathcal{S}_g$. Since $G$ fixes the homoclinic orbit $\ell$ pointwise, the carried vector field $G_*(X_0)$ has the same homoclinic orbit. According to the form of the restriction of $G$ to the upper boundary of $\mathcal{M}_p$, the projection of $D_1(0)$ to the meridian disc is unchanged. Therefore, the $\phi$-equator is preserved. Looking in the lower boundary, one derives that the $\phi$-latitude of $a^-(X_0)= \Sigma^-\cap\ell$ is preserved.
Consider the disc $D'_1(0):= H^{-1}_{X_0}(\Sigma^+)$. Recall from Lemma \ref{C^1} that $G\vert_{\partial^-\mathcal{M}_p}$ is tangent to $Id$ at every point of $\Sigma^-$. Therefore, the tangent space $T_{a^-}D'_1(0)$ remains invariant by $G$. It follows that the $\psi$-equator is not changed, and hence,
the $\psi$-latitude of $a^+$ is preserved. Thus we have the invariance of the spherical
axes and of their intersection $\mathcal{S}_g^{0,0}$.
It remains to show that the character function is invariant. We already have seen the invariance of the
latitudes. The last term to control is the holonomic factor $\eta(X_0)$---resp.
$\eta(G_*(X_0))$---defined in (\ref{eq:HolonDec}). This factor remains unchanged by the action of $G$ thanks to the invariance of:
\begin{itemize} \item $\partial_v^1$ by invariance of the $\phi$-latitude (see Equation (\ref{eq:ChoiceVAxis})), \item $\partial_v^0$ since
$DG_{a^-}=Id$, \item the framing in which $\partial_v^0$ decomposes (this framing is preserved by invariance of the $\psi$-latitude). \end{itemize}
$\Box$\\
\section{ Change in the Morse-Novikov complex}\label{section3}
\subsection{\sc A groupoid approach.} \label{ssec:Groupoid}
A groupoid $\mathscr{G}$ is a {\it small category} where every arrow is invertible. The set of objects in $\mathscr{G}$ is denoted by $\mathscr{G}^0$
and the set of arrows (or morphisms) by $\mathscr{G}^1$. Given two objects $p,q\in \mathscr{G}^0$, the set of arrows
from $p$ to $q$ is denoted by $\Hom(p,q)$.
The \emph{identity} arrow at $p$ is denoted by $1_p\in\Hom(p,p)$. The map $p\mapsto 1_p$ embeds $\mathscr{G}^0$
into $\mathscr{G}^1$; and hence, $\mathscr{G}$ may be identified with its set of arrows endowed with its subset of identity arrows.
The maps \emph{source} and \emph{target}, $s,t:\mathscr{G}^1\to\mathscr{G}^0$ are defined by $s(g)= p$ and $t(g)= q$ for every morphism
$g\in \Hom(p,q)$.
\begin{remarque}\label{rem:GroupoidRing}
We denote by $\mathbb{Z}[[\mathscr{G}]]$ the set of formal series of the arrows of $\mathscr{G}$. An element $\lambda\in\mathbb{Z}[[\mathscr{G}]]$
is usually written as $\lambda=\sum_{g\in\mathscr{G}}n_g(\lambda)g$, where $ n_g(\lambda)\in\mathbb{Z}$. Define the \emph{support} of $\lambda$ as the set $\supp(\lambda):=\ens{g\in\mathscr{G}}{n_g(\lambda)\neq 0}$. Consider the set \begin{equation} \mathbb{Z}[\mathscr{G}]:=\ens{\lambda\in\mathbb{Z}[[\mathscr{G}]]}{\supp(\lambda)\text{ is finite}}. \end{equation} Given two arrows $g,h$---seen as elements of $\mathbb{Z}[\mathscr{G}]$--- the product $gh\in\mathbb{Z}[\mathscr{G}]$ is defined by their composition in $\mathscr{G}^1$ when $t(g)=s(h)$ and by $0$ otherwise. Extending the previous rule distributively with respect to the sum, we obtain a ring structure for $\mathbb{Z}[\mathscr{G}]$. Moreover, when $\mathscr{G}^0$ is finite, the element $1 := \sum_{p\in\mathscr{G}^0}1_p\in\mathbb{Z}[\mathscr{G}]$ gives an \emph{identity element} for this product. We call $\mathbb{Z}[\mathscr{G}]$ the \emph{groupoid ring} associated with $\mathscr{G}$. The next definition is classical. \end{remarque}
\begin{defn}\label{fund_gr} The fundamental groupoid $\Pi$ of the manifold $M$ is defined as follows: its objects are the points of $M$ and, if $(p,q)$ is a pair of points, $\Hom(p,q)$ is the set of homotopy classes of paths from $p$ to $q$. If $\gamma$ is such a path, its homotopy class $[\gamma]$ will be called the $\Pi$-value of $\gamma$. \end{defn}
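As a purely illustrative instance of the product rule of Remark \ref{rem:GroupoidRing} inside the fundamental groupoid, with hypothetical arrows: if $a\in\Hom(p,q)$ and $b\in\Hom(q,r)$ are homotopy classes of paths, then $ab\in\Hom(p,r)$ is the class of the concatenated path, $1_p\, a=a=a\, 1_q$, and $b\,a=0$ in the groupoid ring whenever $r\neq p$.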
The closed 1-form $\alpha$ (whose cohomology class is denoted by $u$) defines a groupoid morphism \begin{equation}\label{u-alpha}
u_\alpha:\Pi\to\mathbb{R}, \quad\quad g\mapsto \int_{\gamma}\alpha, \end{equation}
where $g$ is the $\Pi$-value
of a path $\gamma$ in $M$ and $\mathbb{R}$
is seen as a groupoid with a single object $0$.
The restriction of any such $u_\alpha$ to the fundamental group $\pi_1(M,p)$ clearly coincides
with the group morphism associated with $u$. \\
We denote by $\Pi_\alpha$ the full subcategory of $\Pi$ whose set of objects
is the set $Z(\alpha)$, the zero set of $\alpha$. By Remark \ref{rem:GroupoidRing}, when $\alpha$ is Morse and $Z(\alpha)$ is non-empty, we may consider the groupoid ring $\mathbb{Z}[\Pi_\alpha]$.\\
A formal series $\lambda\in\mathbb{Z}[[\Pi_\alpha]]$ fulfills the \emph{Novikov Condition} if \begin{equation}\label{eq:NovikovCond} \text{for every } L\in\mathbb{R}, \text{ the set } \supp(\lambda)\cap u_\alpha^{-1}(L,+\infty) \text{ is finite}. \end{equation}
Denote by $\Lambda_u\subset \mathbb{Z}[[\Pi_\alpha]] $ the subset of formal series satisfying the Novikov Condition. It can be easily checked that the product rule given in Remark \ref{rem:GroupoidRing} also gives a ring structure to $\Lambda_u$, having the same identity element as $\mathbb{Z}[\Pi_\alpha]$. We call $\Lambda_u$ the \emph{Novikov ring associated with $\alpha$}.
\begin{ex} Let $p\in Z(\alpha)$ and $ g\in \pi_1(M, p)$ with $u(g)<0$ (for instance,
the $\Pi$-value of a homoclinic orbit of some $\alpha$-gradient). The following formal series are elements of the Novikov ring: \begin{equation} \sum_{j=1}^\infty g^j\,\quad\text{and}\quad \sum_{j=1}^\infty (-1)^j g^j\ \end{equation}
Indeed, the Novikov Condition \eqref{eq:NovikovCond} is fulfilled since $u_\alpha(g^j)=j\,u_\alpha(g)$, which goes to $-\infty $ as $j\to +\infty$. Thus, these two series belong to the Novikov ring $\Lambda_u$.
In particular, $1+g+g^2+\ldots$ is a
unit whose inverse is $1-g$. \\
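For the reader's convenience, here is the direct check of this last claim, by telescoping in $\Lambda_u$:
$$
(1-g)(1+g+g^2+\cdots)= (1+g+g^2+\cdots)-(g+g^2+g^3+\cdots)=1,
$$
and similarly in the other order; the rearrangement is licit since, for each $j$, only finitely many products contribute to the coefficient of $g^j$.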
\end{ex}
\subsection{\sc The Morse-Novikov complex.} Let $X$ be an $\alpha$-gradient which is assumed $KS$. An orientation is arbitrarily chosen on the unstable manifold $W^u(p,X)$ for each zero $p\in Z(\alpha)$. We are going to define a chain complex $\bparent{C_*(\alpha),\partial^X}$ of $\Lambda_u$-modules; it is graded by the integers $i\in\{0, 1, \ldots, n=\dim M\}$. This complex will be called the {\it Morse-Novikov} complex associated with the pair $(\alpha, X)$.
For each degree $i$, the module $C_i(\alpha)$ is the left $\Lambda_{u}$-module freely generated by $Z_i(\alpha)$, the finite set of zeroes of $\alpha$ of Morse index $i$. The $\Lambda_u$-morphism $\partial_* ^X: C_*(\alpha)\to C_{*-1}(\alpha)$ must have the following form on each generator of $C_i(\alpha)$:
\begin{equation}\label{eq:NovDiff} \partial^X_*(p)=\sum_{q\in Z_{*-1}(\alpha)}n(p,q)^X q, \end{equation} where the coefficient of $q$ has to be an element of $\Lambda_u$ (called the
\emph{incidence} of $p$ to $q$). This coefficient
$n(p,q)^X$ is the algebraic count
which we are going to define.
Let $\Orb^X(p,q)$ denote the set of connecting orbits of $X$
from $p$ to $q$.
First, we define the sign of a connecting orbit $\ell\in\Orb^X(p,q)$. Given a point $x\in\ell$, the sign $\varepsilon_\ell$ is defined by the following equation: \begin{equation} \varepsilon_\ell\, X(x)\wedge \text{co-or}\parent{W^s(q,X)}= \text{or} \parent{W^u(p,X)}. \end{equation} This definition is clearly independent of $x\in\ell$.} \begin{defn} Assume the $\alpha$-gradient $X$ is $KS$. The \emph{Morse-Novikov incidence} associated with the data $(p,q,X)$, $p\in Z_i(\alpha)$, $q\in Z_{i-1}(\alpha)$,
is defined by: \begin{equation} n(p,q)^X:=\sum_{\ell\in\Orb^X(p,q)}\varepsilon_\ell \,g_\ell \end{equation} where $g_{\ell}$ denotes the $\Pi$-value of the connecting orbit $\ell$. \end{defn}
By Proposition \ref{deep-appendix}, this coefficient fulfills the Novikov Condition (\ref{eq:NovikovCond}).
So, it is an element of $\Lambda_u$. Moreover, the map $\partial^X$ as in \eqref{eq:NovDiff} is indeed a differential; this can be found in \cite{latour}. The resulting $\bparent{C_*(\alpha),\partial^X}$ is known as the Morse-Novikov complex (see \cite{novikov}, \cite{sikorav}).\\
We denote by $\Orb^X_L(p,q)$ the set of connecting orbits from $p$ to $q$ whose $\alpha$-length $ \mathcal{L}(\ell)$ is less than a fixed $L>0$. Since these orbits verify the inequality $u_{\alpha}(g_\ell)>-L$, we are led to define a $L$-\emph{truncation} map $\mathcal{T}_L: \Lambda_u\to\mathbb{Z}[\Pi_\alpha]$ by: \begin{equation} \mathcal{T}_L(\lambda):=\sum_{u_{\alpha}(g)>-L}n_g(\lambda)g. \end{equation}
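As an illustration of the truncation, take $g\in\pi_1(M,p)$ as in the example above, with $u_\alpha(g)<0$, and let $\lambda=\sum_{j=1}^\infty g^j\in\Lambda_u$. Since $u_\alpha(g^j)=j\,u_\alpha(g)>-L$ exactly when $j<L/\vert u_\alpha(g)\vert$, the truncation is the finite sum
\begin{equation*}
\mathcal{T}_L(\lambda)=\sum_{1\leq j<L/\vert u_\alpha(g)\vert} g^j\ \in\ \mathbb{Z}[\Pi_\alpha]\,.
\end{equation*}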
Two elements $\lambda,\mu\in\Lambda_{u}$ are said to be equal modulo $L$ if $\mathcal{T}_L(\lambda-\mu)=0$.\\
Finally, the \emph{$L$-incidence} is defined as follows: \begin{equation} n(p,q)^X_L:=\sum_{\ell\in\Orb^X_L(p,q)}\varepsilon_\ell \,g_\ell\in\mathbb{Z}[\Pi_\alpha]. \end{equation} Of course we have $\mathcal{T}_L\bparent{n(p,q)^X}=n(p,q)^X_L$.
\subsection{\sc Effect of homoclinic bifurcation on the incidence.}
Consider a generic one-parameter family of adapted $\alpha$-gradients $(X_s)_s$
such that $ X_0\in\mathcal{S}_g$. By definition, $X_0$ has a unique homoclinic orbit $\ell$ connecting $p\in Z(\alpha)$ to itself whose $\Pi$-value is $g$. Denote the index of $p$ by $i$.
The next definition specifies some genericity conditions that will be needed to prove the theorem below. The rest of this section is devoted to its proof and consequences.
\begin{defn} \label{almost}Let $L>0$.
\noindent{\rm 1)} The $\alpha$-gradient $X$ is said to be \emph{Kupka-Smale up to} $L$ if, for every pair of zeroes $p,q\in Z(\alpha)$ and every $X$-orbit $\ell$ from $p$ to $q$ with $-\int_\ell \alpha<L$, the unstable and stable manifolds, $W^u(p,X)$ and $W^s(q,X)$, are transverse along $\ell$. The subset of $\mathcal{F}_\alpha$ formed with such $\alpha$-gradients is noted $\mathcal K S_L$.
\noindent{\rm 2)} An $\alpha$-gradient $X\in \mathcal{S}_g$
is said to be \emph{almost Kupka-Smale up to }$L$, if the preceding transversality condition is fulfilled except for the unique homoclinic orbit whose $\Pi$-value is $g$. The subset of $\mathcal{S}_g$ formed with such elements is noted $\mathcal{S}_{g,L}$.
\noindent{\rm 3)} The $\alpha$-gradient $X\in \mathcal{S}_g$
is said to be \emph{almost Kupka-Smale} if it is almost Kupka-Smale up to $L$ for every $L>0$. These gradients are the elements of $\mathcal{S}_{g,\infty}: = \bigcap_L\mathcal{S}_{g,L}$.
\end{defn}
\begin{prop}${}$\label{g-residual}
\noindent{\rm 1)} The subspace $\mathcal{S}_{g,L}$ is open and dense in $\mathcal{S}_g$. Moreover, there exists an open set $\mathcal W_L$ in $\mathcal{F}_\alpha$ such that $\mathcal W_L\smallsetminus \mathcal{S}_g$ is contained in $\mathcal K S_L$.
\noindent{\rm 2)} The subspace $\mathcal{S}_{g,\infty}$ is residual in $\mathcal{S}_g$. \end{prop}
\nd {\bf Proof.\ } 1) One checks that the constraint of having a unique homoclinic orbit with a given $\Pi$-value does not prevent us from arguing as Peixoto does in \cite{peixoto}.
\noindent 2) This item is a little more subtle since $\mathcal{S}_g$ is not a complete metric space. But it is separable. Then, it is sufficient to prove that, for any closed ball $B$ in $\mathcal{S}_g$, the intersection $\mathcal{S}_{g,\infty}\cap B$ is residual in $\mathcal{S}_g\cap B$. And now, we are allowed to follow Peixoto word for word. More details are left to the reader.
$\Box$\\
\begin{notation}{\rm It is easily seen that $\mathcal{S}_{g,L}$ is open in $\mathcal{S}_g$; and, if $X\in \mathcal{S}_{g,L}$
there is an arbitrarily small neighborhood $U$ of $X$ in $\mathcal{F}_\alpha$ such that $U\smallsetminus \mathcal{S}_g$ is made of two connected components in $\mathcal K S_L$. In particular, if $(X_s)_s$ is a path which intersects $\mathcal{S}_g$ transversely at $ X_0\in \mathcal{S}_{g,L}$, the gradient $X_s$
is $KS$ up to $L$ for every $s\neq 0$ close enough to 0.
Therefore, for every $q\in Z_{i-1}(\alpha)$
the $L$-incidence $n(p,q)^{X_s}_L$ is well defined and independent of $s$ when $s\in \mathcal{O}_+$ (resp. $s\in \mathcal{O}_-$); it is denoted by $n(p,q)^\pm_L$ respectively. Here, the symbol $\mathcal{O}_-$ stands for an open interval $(-\epsilon,0)$ whose size is not specified and which is as small as needed by the statement; and similarly for $\mathcal O_+$. } \end{notation}
\begin{thm}\label{thm:selfslideAlg} Let $(X_s)_{s}$ be a path of adapted $\alpha$-gradients which intersects
$\mathcal{S}_g$ transversely at $ X_0$ and let $L> - u_\alpha(g)$. Assume $X_0$ is almost Kupka-Smale up to $L$, that is $X_0\in \mathcal{S}_{g,L}$. Then we have the following.
\noindent When $(X_s)_s$ intersects the stratum $\mathcal{S}_g$ positively,
the next relations hold in $\Lambda_u$: \begin{enumerate} \item if $ X_0\in\mathcal{S}_g^+$, then $n(p,q)^+_L=\bparent{1+g+g^2+g^3+\ldots} \cdot n(p,q)^-_L$\quad{\rm (mod. $L$)}, \item if $X_0\in\mathcal{S}_g^-$, then $n(p,q)^+_L=\bparent{1+g}\cdot n(p,q)^-_L$\quad{\rm (mod. $L$)}. \end{enumerate}
\noindent When $(X_s)_s$ intersects the stratum $\mathcal{S}_g$ negatively, we have: \begin{enumerate} \item[(1')] if $ X_0\in\mathcal{S}_g^+$, then $n(p,q)^+_L=\bparent{1-g}\cdot n(p,q)^-_L$\quad{\rm (mod. $L$)}, \item[(2')] if $ X_0\in\mathcal{S}_g^-$, then $n(p,q)^+_L=\bparent{1-g+g^2-g^3+\ldots}\cdot n(p,q)^-_L$\quad{\rm (mod. $L$)}. \end{enumerate}
\end{thm}
It is worth noticing the reason for the truncation: in general, the bifurcation at $s=0$ is not isolated among the bifurcation times of the path $(X_s)_s$. When it is isolated, the truncation is not needed any more; this will be the case in \cite{l-m}.\\
\noindent{\bf Proof of $(1)\iff(1')\iff (2)\iff (2')$.} The first equivalence is obvious since $1- g$ is the inverse of $ 1+\sum_{j=1}^\infty g^j$; and similarly for the last equivalence.
Let us show the middle equivalence. It is obtained by changing the vector $\partial_v$ into its opposite in the coordinates of the tube around the homoclinic orbit of $X_0$. This amounts to putting a sign in Formula (\ref{orientation1}). The latter change has three effects: \begin{enumerate} \item[i)] It reverses the co-orientation of $\mathcal{S}_g$. Hence, positive and negative crossings are exchanged. \item[ii)] The character is changed into its opposite since the $\phi$- and $\psi$-latitudes are.
Thus, both sides of $\mathcal{S}^0_g$ are exchanged. \item[iii)] The homoclinic orbit becomes negative in the following sense: if the $\phi$-sphere is seen as the boundary of the meridian disc at $a^+$, the new positive hemisphere projects to the preferred hyperplane $\Delta^\phi$ (whose orientation is unchanged) by reversing the orientation. This implies that in the algebraic count of connecting orbits from $p$ to $q$ (where $i(p)=i(q)+1$) the coefficient $g$ has to be changed to $-g$ (see the orientation claim in Lemma \ref{lem:C1(s)}). \end{enumerate} We are left to prove Theorem \ref{thm:selfslideAlg} in case (1). This will be done in Subsection \ref{continued}.
$\Box$\\
According to Proposition \ref{invariance}, the statement of Theorem \ref{thm:selfslideAlg}
is invariant by the groups $\mathcal{G}^\pm$ introduced in Notation \ref{cG}. After Proposition \ref{conjugation}, it is sufficient to consider the case where the crossing path in question is {\it normalized} in the sense of Definition \ref{normalization}. This assumption is done in what follows. We need some more notations and lemmas. The setting of Theorem \ref{thm:selfslideAlg} is still assumed.
\begin{notation}{\rm ${}$\label{not:C1(s)}
\noindent 1) Recall from Subsection \ref{perturbed} that, $H_s$ denotes the perturbed holonomy diffeomorphism
along the homoclinic orbit $\ell$. For $s\in \mathcal{O} p(0)$, it maps $\mathcal Op(\{z=0\}) \subset \partial^- \mathcal{M}_p$ to an
open set of $\partial^+ \mathcal{M}_p$ containing $\{z=1\}$.
\noindent 2) For $s\in \mathcal Op(0)$, let $D_1(s)$ denote the image $H_s(\Sigma^-)\cap\{z=1\}$. Consider its projection $\pi^{\psi_0}(D_1(s))$ onto the meridian disc $\{\psi=\psi_0\}$ and define \begin{equation}\label{a+(s)} a^+(s):= \pi^{\psi_0}(D_1(s))\cap\mathbb{R}\partial_v^1. \end{equation} The {\it crossing velocity} of the crossing path is \begin{equation}\label{initial_velocity} V_1 := \frac {d }{ds} a^+(s)_{\vert s=0}. \end{equation} After reparametrization, we may assume $V_1=1$.
\noindent 3) For $s\in \mathcal O_\pm$, by definition of a crossing path $D_1(s)$ avoids $\Sigma^+$. Therefore we are allowed to define $C_1(s):= Desc(D_1(s))\subset \partial^-\mathcal{M}_p$. It is still an $(i-1)$-disc. } \end{notation}
\begin{lemme}\label{lem:C1(s)} Recall the natural projection $\pi^-: \partial^- \mathcal{M}_p\to \Sigma^-$. Let $K$ be any compact disc in the open hemisphere $\mathcal H^-(\Sigma^-)$ {\rm (}as in {\rm Subsection \ref{ssec:Equators})}. Then, for $s\in \mathcal O_-$, the disc $C_1(s)\cap (\pi^-)^{-1}(K)$ is a graph over $K$ of a section of $\pi^-$. This section goes to the zero-section $0_K$ of $\pi^- $ over $K$ in the $C^1$-topology as
$s$ goes to $0$. Moreover, $\pi^-: C_1(s)\to \Sigma^-$ is orientation reversing.
A similar statement holds when $K\subset \mathcal H^+(\Sigma^-)$ and $s\in\mathcal O_+$, except that $\pi^-: C_1(s)\to \Sigma^-$ is orientation preserving in that case. \end{lemme}
\nd {\bf Proof.\ } The statement about orientation is clear after the claim about the $C^1$-convergence. Consider the case $s\in\mathcal O_-$, the other case being similar.
Recall the normalization assumption: the disc $D_1(0)$ is contained in the meridian disc $\{\psi=\psi_0\}$.
Recall the projection $\pi^{\psi_0}$ of $\partial^+\mathcal{M}_p$ to the meridian disc. The normalization implies that the projected discs $\pi^{\psi_0}(D_1(s))$ tend to $D_1(0)$ in the $C^1$-topology.
Recall the identification $\partial(\{\psi=\psi_0\})\cong \Sigma^-$ of \eqref{eq:IdentPhi}
and think of $K$ as a compact subset of the South hemisphere in the boundary of the meridian disc
$\{\psi=\psi_0\}$.
For every such $K$,
the following property holds:
\begin{itemize}
\item[] {\it For every $s$ close enough to 0 and for every $\phi\in K$,
the disc $\pi^{\psi_0}(D_1(s))$
intersects the ray directed by $\phi$ in $\{\psi=\psi_0\}$ transversely and in exactly one point. }
\end{itemize}
This point is denoted by $m_s(\phi)$; it is the image of some
$\tilde m_s(\phi)\in D_1(s)$ through $\pi^{\psi_0}$.
We have $m_0(\phi)=\tilde m_0(\phi)= a^+$, but when $s\neq 0$, the point
$\tilde m_s(\phi)$ has well-defined multi-spherical coordinates $(\phi, r_s(\phi),\psi_s(\phi))$ where
$r_s$ and $\psi_s$ depend smoothly on $s$.
Going back to $\partial^-\mathcal{M}_p$ by the map $Desc$, we see that $C_1(s)$ is the image of a section
of the projection $\pi^-$ over $K$.
When $s\in \mathcal{O}_-$ goes to 0, then $D_1(s)\cap \{(\phi,r,\psi)\in \partial^+\mathcal{M}_p
\mid \phi\in K\} $
goes to $a^+$ in the metric sense. In particular, $\max_{\phi\in K} \{r\,\vert\, (\phi,r,\psi)\in D_1(s)\}$
goes to 0. Therefore, $C_1(s)\cap (\pi^-)^{-1}(K)$ goes to $0_K$ in the $C^0$-topology when $s$ goes to $0$ negatively.
For the $C^1$-convergence, we use that $K$ is far from the $\phi$-equator of $\Sigma^-$. Therefore, the angle in the meridian disc $\{\psi=\psi_0\}$ between the ray directed by $\phi$ and the tangent plane to $ \pi^{\psi_0}(D_1(s))$ at $ m_s(\phi)$ is bounded from below. Including the fact that $s\to 0_-$ implies $r\to 0$, it follows that the smooth functions $r_s(\phi)$ and $\psi_s(\phi)$ satisfy
$$\left\{\begin{array}{c} \vert dr\vert= O_s(r) \vert d\phi\vert,\\ \vert d\psi\vert =O_s(r) \vert d\phi\vert, \end{array} \right. $$
where $O_s(r)$ stands for a quantity which is uniformly bounded by a positive multiple of $r$ when $s$ goes to 0. This yields the claimed $C^1$-convergence of the part of $C_1(s)$ over $K\subset \mathcal H^-(\Sigma^-)$ to $0_K = K$.
$\Box$\\
\subsection{\sc Geometric interpretation of the character function.}\label{interpretation} We still consider a germ of {\it normalized} positive crossing path $\left(X_s\right)_s$. Let $D_1'(0) $ be the connected component of $W^s(p, X_0)\cap \{z=0\}$ which contains $a^-$. This is an $(n-i-1)$-disc which is the image of $\Sigma^+$ by the inverse holonomy diffeomorphism $H_0^{-1}$ along $\ell$. For every $s\in \mathcal Op(0)$, consider now $D_1'(s):= H_s^{-1}(\Sigma^+)\cap \{ z=0\}$.
Recall from Subsection \ref{ssec:tube}
that $\Sigma^-\cap \{z=0\}$ is identified with the $x$-axis whereas $D_1'(0)$ is identified with the $y$-axis.
Let also $p_v:\sing{z=0}\to\sing{ v=0, z=0}$ denote the projection parallel to $\partial_v$ onto the $(x,y)$-space. When $s\in \mathcal Op(0)$ goes to 0, the family $D'_1(s)$ accumulates to the $y$-axis in the $C^1 $-topology.
Under the condition $\omega_\phi(X_0)\neq 0$, that is $\cos_\phi(a^-)\neq 0$,
Lemma \ref{lem:C1(s)} tells us that the family $C_1(s)\cap \{z=0\} $ accumulates to the $x$-axis in the $C^1$-topology if and only if $s\, \cos_\phi(a^-)$ goes to $0_+$. In particular, when $s\, \cos_\phi(a^-)>0$ the projections $p_v(C_1(s))$ and $p_v(D'_1(s))$ intersect
in a unique point $b_1(s)$ and transversely.
If $s\, \cos_\phi(a^-)$ is negative, then $C_1(s)\cap \{z=0\} $ is empty.\\
Denote by $c_1(s)\text{ and } d_1'(s)$ the only points in $C_1(s)\cap\sing{z=0}$ and in $D_1'(s)$ respectively such that $p_v(c_1(s))=b_1(s)=p_v(d_1'(s))$. Consider the real number \begin{equation}\label{eq:v1(s)} v_1(s):=v(c_1(s))-v(d_1'(s))\quad \text{ for every } s \text{ such that }s\, \cos_\phi(a^-)\in \mathcal O_+. \end{equation} This function $v_1(s)$ depends smoothly on $s$. Its derivative with respect to $s$ is denoted by $\dot{v}_1(s)$.
\begin{remarque}\label{rem:g2} By construction, $v_1(s)=0$ implies $c_1(s)=d_1'(s)$ which in turn implies
the existence of an orbit $\ell_s\in\Orb^{X_s}(p, p)$ passing through $c_1(s)$ such that $[\ell_s]=g^2$. This remark
will be used when analysing the doubling phenomenon in Section \ref{section4}. \end{remarque}
Lemma \ref{geometry-character} will show
the kinematic
meaning of the character function $\chi$ at $X_0$.
\begin{lemme}\label{geometry-character} Let $(X_s)_s$ be a normalized positive crossing path of $\mathcal{S}_g$ whose crossing velocity
\eqref{initial_velocity} is equal to $ +1$. If $\cos_\phi(a^-)\neq 0$ then
the following relation holds: \begin{equation}\label{eq:CharGeom}
\cos_\phi(a^-)\, \dot{v}_1(0)=\chi( X_0). \end{equation}
\end{lemme}
\nd {\bf Proof.\ } Let us study the $v$-coordinate of $c_1(s)$ first. We notice that, if $\bar c_1(s)$ is another point of $C_1(s)$ depending smoothly on $s$ and such that $\bar c_1(0)=a^-=(\phi_0,0,-)$,
we have the same velocity at $s= 0$:
\begin{equation}\label{same-speed} \frac{d}{ds} v\left(\bar c_1(s)\right)_{\vert s=0} =\frac{d}{ds} v\left(c_1(s)\right)_{\vert s=0}. \end{equation} Indeed, $C_1(s)$ accumulates in the $C^1$-topology to $\Sigma^-\cap\{z=0\}$ (Lemma \ref{lem:C1(s)}); hence the difference $\dot c_1(0)-\dot{\bar c}_1(0)$ is a vector in $T_{a^-}\Sigma^-$. We apply this remark to the point $\bar c_1(s):=C_1(s)\cap\{\phi=\phi_0\}$. Let $d_1(s)\in D_1(s)$ be the lift of $\bar c_1(s)$ by $Desc^{-1}$.
Since $Desc$ preserves the $(\phi, r, \psi)$-coordinates, both paths $s\mapsto \bar c_1(s)$ and $s\mapsto d_1(s)$ have the same coordinates
$(\phi(s)=\phi_0, r(s), \psi(s))$ when $s\neq 0$. As $\bar c_1(s)\in\{\phi=\phi_0 \}$, the vector $\dot d_1(0)$ belongs to the $(n-i)$-plane
which is the span of $\{\mathbb{R}\,\phi_0, T_{a^+}\Sigma^+\}$. Let ${\hat d}_1(0)$ be its projection to the line $\mathbb{R}\,{\phi_0}$ in the meridian disc $\{\psi=\psi_0\}$. Then, \begin{equation}\label{eq:Info1} \begin{array}{l}
{\hat d}_1(0)=\rho\, \phi_0
\text{ where } \rho={\displaystyle\frac{d}{ds}r(s)_{\vert s=0} }.\\ \end{array}
\end{equation} By definition of the $\phi$-latitude (Proposition \ref{prop:Latitudes}) we have: \begin{equation}
\langle \nu_\phi,{\hat d}_1(0)\rangle=\rho\langle \nu_\phi,\phi_0\rangle= \rho\, \cos_\phi (a^-).
\end{equation}
By definition, the hyperplane $\Delta^\phi$ is tangent at $a^+$ to $D_1(0)$.
Therefore, a first-order Taylor expansion shows that
\begin{equation}
\langle \nu_\phi,{\hat d}_1(0)\rangle =\frac{d}{ds} a^+(s)_{\vert s=0}=+1\,.
\end{equation}
We derive:
\begin{equation}\label{eq:rho} \rho=\frac{1}{ \cos_\phi(a^-)}\ . \end{equation}
Since $d_1(s) $ goes to $a^+=(-,0,\psi_0)$ as $s$ goes to $0$ and since the radial velocity is preserved by $Desc$, then we have: \begin{equation} \dot{\bar c}_1(0)= \rho\,\psi_0\in T_{a^-}\{\phi=\phi_0\}. \end{equation}
Using again \eqref{eq:Trigo}, but relative to the preferred hyperplane $\Delta^{\psi}$ which defines the $\psi$-latitude,
we obtain \begin{equation}
\langle \nu_{\psi}, \dot{\bar c}_1(0)\rangle=\rho\,\cos_{\psi}(a^+). \end{equation} This together with the decomposition of $T_{a^-}\{z=0\}$ of \eqref{eq:decompTgt} says that there are two vectors $w_x\in T_{a^-}\Sigma^-$ and $ w_y\in \Delta^\psi$ such that \begin{equation}\label{eq:c1'(0)} \dot{\bar c}_1(0)=w_x+ \rho\,\omega_{\psi}(a^+)\nu_{\psi} + w_y. \end{equation}
By \eqref{eq:c1'(0)}, the $v$-coordinate of $\dot{\bar c}_1(0)$ is carried by the term $\rho\,\omega_{\psi}(a^+)\nu_{\psi}$, so that $v(\dot{\bar c}_1(0))
=\rho\,\omega_{\psi}(a^+)\, v(\nu_{\psi})$. On the other hand, \eqref{eq:HolonDec} tells us that: \begin{equation}\label{eq:HolomMod} v(\nu_{\psi})=\frac{1}{\bar{\eta}}=\eta\,. \end{equation} Putting together (\ref{same-speed}), \eqref{eq:rho}, \eqref{eq:c1'(0)} and \eqref{eq:HolomMod} we obtain: \begin{equation}\label{eq:c1der} v(\dot{c}_1(0))= v(\dot{\bar c}_1(0))=\rho\,\omega_{\psi}(a^+)\, v(\nu_\psi)=\frac{\cos_{\psi}(a^+)}{\cos_{\phi}(a^-)}\,\eta\,. \end{equation}\\
We now estimate the term $v(\dot{d'}_1(0))$. We apply Lemma \ref{velocity} to compare the velocities associated with the holonomy $H_s$ and its inverse. From Formula (\ref{initial_velocity})
we derive that
\break $\frac {d}{ds} (v\circ H_s)(a^-)_{\vert s=0}= +1$. Then, the inverse holonomy satisfies \begin{equation}\label{second} \frac {d}{ds} (v\circ H_s^{-1})(a^+)_{\vert s=0}= -1 \end{equation} from which it is easily derived that $v(\dot{d'}_1(0))=-1$. Therefore: \begin{equation}\label{eq:GrandV2} \dot v_1(0)=\eta\frac{\cos_{\psi}(a^+)}{\cos_{\phi}(a^-)}+1 \end{equation} which is a reformulation of the desired formula.
$\Box$\\
Lemma \ref{lem:Passages} right below is the last tool that we need for proving Theorem \ref{thm:selfslideAlg}. It extracts the geometric information contained in Equation (\ref{eq:CharGeom}). The setting is the same as in the previous lemma. We are only looking at normalized paths $(X_s)_s$ which cross $\mathcal{S}_g$ positively at a point $X_0\in S_g^+$.
\begin{lemme}\label{lem:Passages}${}$
\noindent {\rm 1)} Suppose the $\phi$-latitude $ \cos_\phi(a^-)$ is positive (and hence $\dot{v}_1(0)>0$). Then, for $s\in \mathcal O_+$ there are sequences of non-empty $(i-1)$-discs $(D_k(s))_{k>1}$ and $(C_k(s))_{k>1}$ inductively defined from the previous $D_1(s)$ and $C_1(s)$ by \begin{equation}\label{defC_k} \left\{ \begin{array}{l} D_k(s):= H_s\left(C_{k-1}(s)\right) \cap \{z=1\}\\ C_k(s) := Desc\left( D_k(s)\right) \subset \partial^- \mathcal{M}_p \end{array} \right. \end{equation}
Moreover, as $s$ goes to 0, the disc $C_k(s)$ tends to the North hemisphere
$\mathcal H^+(\Sigma^-)$
in the $C^1$-topology, uniformly over every compact set of $\mathcal H^+(\Sigma^-)$.
When $s\in \mathcal O_-$, both previous sequences are empty when $k>1$.
\noindent {\rm 2)} If $\dot{v}_1(0)$ and $ \cos_\phi(a^-)$ are negative,
then for $s\in \mathcal O_-$ the disc $C_2(s)$ is well defined as in \eqref{defC_k}
and the subsequent discs, $D_3(s), ...$, are empty. Moreover,
$C_2(s)$ tends to $\mathcal H^+(\Sigma^-)$ in the $C^1$-topology with the reversed orientation.
When $s\in \mathcal O_+$, all discs in \eqref{defC_k} are empty when $k>1$. \end{lemme}
\nd {\bf Proof.\ } 1) When $s\in \mathcal O_-$, the disc $C_1(s)$ does not meet the tube $T$ around the homoclinic orbit $\ell$. Then $D_2(s)$ is empty and hence, all further discs are so.
Assume now that $s\in \mathcal O_+$. In that case, $C_1(s) $ goes to $\mathcal H^+(\Sigma^-)$ (Lemma \ref{lem:C1(s)}) and therefore meets the set $\{z=0\}$. Then, the discs $D_2(s)$ and $C_2(s) $ defined in \eqref{defC_k} are non-empty. We are going to compute the position of $C_2(s)$ with respect to $D'_1(s)$ measured by some $v_2(s)$ in the direction of the $v$-coordinate.
We shall check the positivity of $\dot v_2(0)$ which will allow us to pursue the induction.
Recall the projection $\pi^{\psi_0}:\partial^+\mathcal{M}_p\to \{\psi=\psi_0\}$ and define the spherical annulus $A:= (\pi^{\psi_0})^{-1}(\mathbb{R}\partial_v^1)$. Consider the point $\tilde c_1(s)$ which is the transverse intersection
$C_1(s)\cap H_s^{-1}(A)$. By projecting to the $v$-axis we find a function $v(\tilde c_1(s))$ which satisfies \begin{equation} \frac{d}{ds}v(\tilde c_1(s))_{\vert s=0}= \frac{d}{ds}v( c_1(s))_{\vert s=0} \end{equation} (Indeed, if $s$ goes to $0_+$, $\lim \tilde c_1(s)= \lim c_1(s)= a^-$). Recall the definition of $d'_1(s)\in D_1'(s)$ from (\ref{eq:v1(s)}). Compute the derivative $V_2$ at $s=0$ of $v\left[H_s(\tilde c_1(s))\right]-v\left[H_s(d'_1(s))\right]$, which is nothing but the velocity of the projection of $H_s(\tilde c_1(s))\in D_2(s)$ onto the $v$-axis of $\{z=1\}$ at $s=0$. Using $\tilde c_1(0)= d'_1(0)=a^-$ and $dH_0(a^-)=Id$ in the coordinates of the tube $T$, we find: \begin{equation}\label{V_2} V_2= \dot v_1(0) \end{equation} which is positive by assumption. This $V_2$ will play the same r\^ole as the crossing velocity.
Since $V_2>0$, Lemma \ref{lem:C1(s)} tells us that $C_2(s)$ meets $\{z=0\}$ when $s\in \mathcal O_+$. Therefore, we choose points $c_2(s)\in C_2(s)=Desc(D_2(s)) $ and
$d'_2(s)\in D'_1(s)$ which form the unique pair of points of the respective subsets
which have the same $p_v$-projection. We define
\begin{equation}
v_2(s):= v(c_2(s))-v(d'_2(s))
\end{equation}
The computation of $\dot v_2(0)$ is exactly the same except we have to replace
$V_1=1$ with $V_2$. The result is:
\begin{equation}
\dot v_2(0)= \eta\frac{ \cos_\psi(a^+)}{ \cos_\phi(a^-)} V_2+1.
\end{equation}
Here,
some discussion is needed according to the sign of $ \cos_\psi(a^+)$:
\begin{itemize}
\item[(i)] if $ \cos_\psi(a^+)$ is positive, then $\dot v_2(0)$ is larger than $ V_1=+1$. In that case
the induction goes on with $V_k>V_{k-1}>\ldots> 1$.
\item[(ii)] if $ \cos_\psi(a^+)$ is negative, then $0<V_2=\dot v_1(0)<1$, where the last inequality comes from \eqref{eq:GrandV2}. Therefore, $\dot v_2(0)-1$ is the product of two numbers\footnote{One of them being $V_2$.} of opposite signs and whose absolute values are smaller than 1. Thus, $\dot v_2(0) $ belongs to $(0,1)$. Such a fact is preserved at each step of the induction.
\end{itemize}
The induction can be carried on.
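For the reader's convenience, the induction may be summarized as follows. Setting $c:=\eta\,\cos_\psi(a^+)/\cos_\phi(a^-)$, the successive velocities introduced above obey the recursion
\begin{equation*}
V_{k+1}=\dot v_k(0)= c\,V_k+1,\qquad V_1=1\,.
\end{equation*}
In case {\rm (i)} one has $c>0$, hence $V_k>1$ for every $k\geq 2$; in case {\rm (ii)} the inequalities $0<\dot v_1(0)<1$ give $c\in(-1,0)$, hence $V_k\in(0,1)$ for every $k\geq 2$. In both cases $V_k$ remains positive, which is exactly what each step of the induction requires.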
\noindent 2) Take $s\in \mathcal O_-$. The calculation yielding the equality (\ref{V_2}) still holds and tells us that
$V_2$ is negative. Remark that $H_s(d'_1(s))\in \Sigma^+$. As $s<0$, one derives:
$$
v\left[H_s(\tilde c_1(s))\right]=v\left[H_s(\tilde c_1(s))\right]-v\left[H_s(d'_1(s))\right]>0.
$$ Thus, Lemma \ref{lem:C1(s)} says that $C_2(s)$ tends to $\mathcal H^+(\Sigma^-)$
in the $C^1$-topology. As $ \cos_\phi(a^-)<0$, $C_2(s)$ does not meet $\{z=0\}$ and the next discs are empty.
Concerning the orientation, we check that $D_2(s)$ tends to $-D_1(0)$ in $\partial ^+\mathcal{M}_p$. Then,
$C_2(s) $ tends to $-\mathcal H^+(\Sigma^-)$. Finally,
the statement when $s\in \mathcal O_+$ is clear.
$\Box$\\
\begin{remarque}\label{rem-nonvanishing} {\rm In the previous analysis, from Notation \ref{not:C1(s)} to Lemma \ref{lem:Passages}, we have given the lead role to the bottom of the Morse model, the attaching sphere $\Sigma^-$, the perturbed holonomy and the map $Desc$. Here, the non-vanishing of the $\phi$-latitude is required.
One can make a similar analysis with the top of the Morse model, the co-sphere $\Sigma^+$, the inverse of the perturbed holonomy and $Desc^{-1}$. There, the non-vanishing of the $\psi$-latitude is needed. But the statements of the lemmas are analogous. As a consequence, if the proof of Theorem \ref{thm:selfslideAlg} can be completed under the assumption $\omega^\phi(X_0)\neq 0$, then it can also be completed when $\omega^\psi(X_0)\neq 0$. } \end{remarque}
\subsection{\sc Proof of Theorem \ref{thm:selfslideAlg} continued.\label{continued}} We continue the proof which begins right after
the statement of this theorem. After a series of equivalences, we are left to prove the case (1) of a positive crossing of the stratum $\mathcal{S}_g$ at a point $X_0\in \mathcal{S}_{g,L}$ where the character function is positive.
We recall that
the statement of Theorem \ref{thm:selfslideAlg} is preserved under the action of the groups
$\mathcal{G}^\pm$ (see Notation \ref{cG}). Therefore, we may assume that $X_0\in \mathcal{S}_{g,L}$ is normalized.
Moreover, as $\chi(X_0)\neq 0$, one of the extended $\phi$-latitude and $\psi$-latitude is non-zero.
By Remark \ref{rem-nonvanishing}, it is sufficient to complete the proof when $\omega^\phi(X_0)\neq 0$.
The element $g\in \Pi_\alpha$ is thought of as an arrow from the set of zeroes $Z(\alpha)$ into itself. Then $g$ determines its origin $p$ which is also its end point. Recall that the Morse index of $p$ is $i$. We look at any zero $q\in Z(\alpha)$ of Morse index $i-1$. We have to compute the change of $n(p,q)^X_L$ when $X$ changes from $X_{0_-}$ to $X_{0_+}$ in the given crossing path $\left(X_s\right)_s$.
It is useful to make some partition, adapted to $g$, of the set of connecting orbits from $p$ to $q$ for the gradient $X_{0_-}$.\\
\noindent {\sc Partition of the connecting orbits.} We may assume that each connecting orbit of
$X_{0_-}$ from $p$ to $q$ is the unique one in its homotopy class. In general, one would take the multiplicity into
account. Recall $\Gamma_p^q$, the set of homotopy classes of paths from $p$ to $q$. The equivalence relation
defining the partition of $\Gamma_p^q$ is the following: $[\gamma_0]\sim [\gamma_1]$
if and only if the homotopy class of $\gamma_1$ reads
$[\gamma_1]= g^k\cdot[\gamma_0] \text{ with } k\in \mathbb{Z}$.
Consider $[\gamma]_{\sim}$, the $\sim$-class of a fixed connecting orbit $\gamma$.
Since the $\alpha$-lengths of connecting orbits are positive, we have $u_\alpha([\gamma'])<0$ for every $\gamma'\in [\gamma]_{\sim}$. Therefore, as $u(g)<0$ there are only finitely many connecting orbits $\gamma'\in [\gamma]_{\sim}$ verifying $u_\alpha([\gamma'])\geq u_\alpha([\gamma])$. Let $\gamma_0$ be a connecting orbit in $[\gamma]_{\sim}$ such that $u_\alpha([\gamma_0])$
is maximal. Then, any
element of $[\gamma]_{\sim}$
reads $g^k\cdot[\gamma_0]$ for some
$k\geq 0$.\\
\noindent{\sc End of the proof.} By $\Lambda_u$-linearity of the Morse-Novikov differential,
without loss of generality we may assume that the above
partition has only one $\sim$-class and that the maximal element $\gamma$ is a positive connecting orbit
(with respect to the chosen orientations).
Let $b:= \gamma\cap \Sigma^-$ and $\Delta_s$, $s=0_-$, be the connected component of $W^s(q,X_s)\cap\partial^-\mathcal{M}_p$ containing $b$.
After shrinking the parameter $\delta$ of $\mathcal{M}_p$ if necessary (see Subsection \ref{ssec:morse}), $\Delta_s$
is an $(n-i)$-disc which intersects $\Sigma^-$ transversely and only at $b$.
We are looking for the change formula
up to $\alpha$-length $L$ (for every $L>-u(g)$). Let $\left(X_s\right)_{s\in \mathcal{O} p(0)}$ be a crossing path with $X_0$ in
$\mathcal{S}_g^+$. We do it first in the case where the $\phi$-latitude $\omega_\phi(X_0)\neq 0$.
There are still four cases to consider
where $a^-$ stands for $a^-(X_0)$ and $\mathcal H^\pm$ stand for $\mathcal H^\pm(\Sigma^-)$:
\begin{enumerate}
\item[(a.1)] The $\phi$-latitude $ \cos_\phi(a^-)$ is positive and $b$ belongs to $\mathcal H^+$.
\item[(a.2)] The $\phi$-latitude $ \cos_\phi(a^-)$ is positive and $b$ belongs to $\mathcal H^-$.
\item[(b.1)] The $\phi$-latitude $ \cos_\phi(a^-)$ is negative and $b$ belongs to
$\mathcal H^+$.
\item[(b.2)] The $\phi$-latitude $ \cos_\phi(a^-)$ is negative and $b$ belongs to $\mathcal H^-$.
\end{enumerate}
The proof consists of applying Lemma \ref{lem:Passages}. It is convenient to use the following
definition. \\
\noindent{\bf Definitions.}${}$
\noindent 1) {\it The positive (resp. negative) part of $W^u(p, X_0)$ is the union of all $X_0$-orbits passing through the positive (resp. negative) hemisphere $\mathcal H^+(\Sigma^-)$ (resp. $\mathcal H^-(\Sigma^-)$). It will be denoted by $W^u(p, X_0)^\pm$.}
\noindent 2) {\it For a given $k>0$, we say that the unstable manifolds $W^u(p, X_s)$
accumulate to $g^k\cdot W^u(p, X_0)^\pm$ when $s$ goes to $0_-$ (resp. $0_+$) if it is true when
lifting to the universal cover, that is:
if $\tilde p$ (resp. $\widetilde X_s$)
is a lift of $p$ (resp. $X_s$), the unstable manifolds $W^u(\tilde p, \widetilde X_s)$
accumulate to $W^u(g^k\tilde p, \widetilde X_0)^\pm$.}\\
Here, it is worth noting that when a point lies in the accumulation set its whole
{$X_0$-orbit}
is also accumulated. As a consequence, Lemma \ref{lem:C1(s)} tells us that
$W^u(p, X_s)$ accumulate to \break
$g\cdot W^u(p, X_0)^\pm$ in the $C^1$-topology
when $s$ goes to $0_\pm$. Thanks to this topology,
it makes sense to compare the orientations. The result is the following: when $s\to 0_\pm$, then
$W^u(p, X_s)$ accumulate to $\pm g\cdot W^u(p, X_0)^\pm$. Accumulation to
$g^k\cdot W^u(p, X_0)^\pm$
for some $k>1$ is dictated by Lemma \ref{lem:Passages} depending on the sign of
the $\phi$-latitude (knowing $\chi(X_0)>0$).
We are now ready for proving (1) from Theorem
\ref{thm:selfslideAlg} in each above-enumerated case.
Let $\lambda_-(\gamma)$ (resp. $\lambda_+(\gamma)$) denote the element of the Novikov ring $\Lambda_u$ which is
the contribution of $[\gamma]_{\sim}$ in $n(p,q)^{X_s}$ when $s<0$
(resp. $s>0$). We have to check the next formula up to the given $L>0$ in each case (a.1) ... (b.2).
\begin{equation}\label{change}
\lambda_+(\gamma)= (1+g+g^2+\cdots)\cdot \lambda_-(\gamma)
\end{equation}
\noindent {\bf Case (a.1).} When $s\to 0_-$, the oriented unstable manifolds $W^u(p, X_s)$ accumulate \break to $-g\cdot W^u(p,X_0)^-$ and nothing else. Therefore,
as $b\in \mathcal H^+$, we have $\lambda_-(\gamma)= [\gamma]$.
When $s\to 0_+$, then $W^u(p, X_s)$ accumulate to $+g^k\cdot W^u(p,X_0)^+$ for every $k>0$ and will intersect $g^k\cdot\Delta_0$ transversely at a single point. Thus,
we have
$\lambda_+(\gamma)=(1+g+g^2+\ldots)\cdot [\gamma]$.
The change of $\lambda_\pm(\gamma)$ from $s<0$ to $s>0$
is indeed given by
Formula \eqref{change}.\\
\noindent {\bf Case (a.2).} As $b\in \mathcal H^-$ and taking into account the accumulation described
right above, we have:
$ \lambda_-(\gamma)= (1- g)\cdot [\gamma]$ and $ \lambda_+(\gamma)= [\gamma]$.
Formula \eqref{change} is still fulfilled.\\
\noindent {\bf Case (b.1).} Here, the accumulation of $W^u(p, X_s)$ is
dictated by part 2) of Lemma \ref{lem:Passages} and the reason for Formula \eqref{change}
is more surprising than in the previous cases.
When $s\to 0_-$, the manifolds $W^u(p, X_s)$ accumulate to $-g\cdot W^u(p, X_0)^-$
and to $-g^2\cdot W^u(p, X_0)^+$ and then nothing else.
When $s\to 0_+$, the manifolds $W^u(p, X_s)$ accumulate to
$+g\cdot W^u(p, X_0)^+$ and nothing else.
As $b\in \mathcal H^+$, we have $\lambda_-(\gamma)= (1-g^2)\cdot [\gamma]$
and
$\lambda_+(\gamma)= (1+g)\cdot [\gamma]$.
Formula \eqref{change} is right since the identity
$(1+g+g^2+\cdots)(1-g^2) = 1+g$ holds in the Novikov ring.\\
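For completeness, the identity invoked here can be checked directly by factoring $1-g^2$:
\begin{equation*}
(1+g+g^2+\cdots)\,(1-g^2)=(1+g+g^2+\cdots)(1-g)\,(1+g)=1+g\,,
\end{equation*}
since $1-g$ is the inverse of $1+g+g^2+\cdots$ in the Novikov ring $\Lambda_u$.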
\noindent {\bf Case (b.2).} Accumulation is as right above. One derives that
$\lambda_-(\gamma)= (1-g)\cdot [\gamma]$ and $\lambda_+(\gamma)= [\gamma]$. The desired formula is still satisfied.\\
The proof of Theorem \ref{thm:selfslideAlg} is now complete since only one $L$ is involved.
$\Box$\\
\begin{remarque}\label{virtual} One could ask what happens when there are no critical points
$q$ of index $i(p)-1$. The answer is the following. The dichotomy $\mathcal{S}_g^+, \mathcal{S}_g^-$ still exists
by the sign of the character function. Since
the \emph{bifurcation factors} $(1+g+g^2+\cdots)$ and $(1+g)$ do not depend on $q$, one can associate them
with each part of $\mathcal{S}_g\smallsetminus \mathcal{S}_g^0$ even if there is no $q$. In order to validate this association,
it is sufficient to imagine a {\it virtual} zero $q^{virt}$ whose stable manifold intersects $\partial_-\mathcal{M}_p$ along
one generic meridian at the beginning of a positive crossing path of $\mathcal{S}_g$.
The same proof as before tells us how the number
of virtual connecting orbits from $p$ to $q^{virt}$ changes.
\end{remarque}
\subsection{\sc Proof of Theorem \ref{thm:selfslideSimplif}.} Here, the statement claims something
to hold \emph{for every
\break$L>-u(g)$} instead of \emph{for a given $L$}. In that case, it is natural that some genericity condition
should be required. The condition in question---that is, a subset of $\mathcal{S}_g$---is the intersection
of all conditions: $X_0\in \mathcal{S}_{g,L}$ for $L\to+\infty$, each of them being the condition
which makes Theorem \ref{thm:selfslideAlg} hold. A priori, this intersection could be empty.
But thanks to Proposition \ref{g-residual}, this intersection is a residual set in $\mathcal{S}_g$ and we are done.
$\Box$\\
\section{\sc The doubling phenomenon. Proof of Theorem \ref{thm:Doubling}}\label{section4}
\subsection{\sc Notations and statement.} In this section, we state and prove the refined version of Theorem \ref{thm:Doubling} which is given right after specifying some definition and notations. It is about the local structure
of $\mathcal{S}_g^0$---the co-oriented locus in $\mathcal{S}_g$ where the character function $\chi$ vanishes---in the complement of $\mathcal{S}_g^{0,0}$, the latter being the locus where both of the extended $\phi$-latitude and $\psi$-latitude vanish.
\begin{defn} \label{decompositionS^0} ${}$
\noindent {\rm 1)} Let $\mathbb{R}_+$ {\rm (}resp. $\mathbb{R}_-${\rm )}
be the set of positive {\rm (}resp. negative{\rm )} real numbers. The open set $\mathcal{S}_g^{0,\pm}\subset \mathcal{S}_g^0$ is defined by the sign of the extended $\phi$-latitude, that is: $X\in \mathcal{S}_g^{0,\pm} \Leftrightarrow \omega_\phi(X)\in \mathbb{R}_\pm$.
\noindent {\rm 2)} Let $X_{0,0}\in \mathcal{S}_g^0\smallsetminus \mathcal{S}^{0,0}_g$. Let $\bigl(\mathcal{D}(s,t):=X_{s,t}\bigr)$ be a \emph{germ} at $X_{0,0}$ of a two-parameter family in $\mathcal{F}_\alpha$, the space of adapted $\alpha$-gradients.
This germ is said to be \emph{adapted} to the pair $(\mathcal{S}_g,\mathcal{S}_g^0)$ if the following conditions are fulfilled: \begin{enumerate} \item The one-parameter family $\bigl(\mathcal{D}(0,t)\bigr)_{t\in \mathcal{O} p(0)}$ is contained in $\mathcal{S}_g$, transverse to $\mathcal{S}_g^0$ and $\partial_t\mathcal{D}(0,0)$ is non-zero and points towards $\mathcal{S}_g^+$.
\item The partial derivative $\partial_s \mathcal{D}(0,0)$ is transverse to $\mathcal{S}_g$ and points towards
its positive side. \end{enumerate} \end{defn}
\noindent In particular, such a $\mathcal{D}$ is transverse to $\mathcal{S}_g^0$.
\begin{thm}\label{thm:DoublingRefined} Let $\mathcal{D}$ be a germ of $2$-discs transverse to $\mathcal{S}_g^0\smallsetminus \mathcal{S}_g^{0,0}$ and adapted to the pair $(\mathcal{S}_g,\mathcal{S}_g^0)$. Then $\mathcal{D}$ intersects $\mathcal{S}_{g^2}$ transversely along an arc of $\mathcal{S}_{g^2}^+$. The trace on $\mathcal{D}$ of the strata $\bparent{ \mathcal{F}_\alpha\,,\,\mathcal{S}_g\cup \mathcal{S}^+_{g^2}\, ,\, \mathcal{S}_g^{0,\pm}}$ is $C^1$-diffeomorphic to
$$\bparent{\mathbb{R}^2,\, \mathbb{R}\times \{0\}\cup\{0\}\times \mathbb{R}_{\pm}\,,\, \{(0,0) \}}\,.$$
Moreover, the natural co-orientation of $\mathcal{S}_{g^2}$ restricts to
the natural co-orientation of $\mathcal{S}_g^{0}$ in $\mathcal{S}_g$ or to its opposite depending upon
whether $\mathcal{S}_{g^2}$ approaches $\mathcal{S}_g^{0,+}$ or $\mathcal{S}_g^{0,-}$ respectively
{\rm (Figure 2)}.
Finally, if $X_{0,0}$ also fulfills the generic property $\mathcal{S}_{g,\infty}$ {\rm (Definition \ref{almost})}
then the germ $\mathcal{D}$ does not meet $\mathcal{S}_{g^k}$ for $k>2$.
\end{thm}
Actually, the proof of Proposition \ref{g-residual} yields that the last property is generic in $\mathcal{S}^0_g$. Indeed,
the new constraint $\chi(X)=0$ only involves a compact domain of $W^u(p,X)$.
We first prove Theorem \ref{thm:DoublingRefined} for particular germs $\mathcal{D}$ which we call {\it elementary}. Such a germ consists of a one-parameter family of positive {\it normalized} crossing paths of $\mathcal{S}_g$ in the sense of Definition \ref{normalization} with some additional requirements. The definition of elementary path looks a bit strange, but it is inspired by a {\it toy model} of crossing $\mathcal{S}_g$ when all moving objects are
affine subspaces in the coordinates of $\mathcal{M}_p$.
\subsection{\sc Elementary crossing path.} Let $(X_s)_s$ be a normalized positive crossing path of $\mathcal{S}_g$. After the normalization (Proposition \ref{conjugation}) we are still allowed
to prescribe more special dynamics of $X_s$;
the {\it perturbed holonomy} will be specified near the respective homoclinic orbit $\ell$ of $X_0\in \mathcal{S}_g$.
Let $a^\pm= \ell\cap \partial_\pm\mathcal{M}_p$; let $(\phi_0, 0,-)$ and $(-,0,\psi_0)$ be the respective multi-spherical coordinates
of $a^-$ and $a^+$.
Consider the spherical annulus $\mathbb{A}_{\psi_0}:=\Sigma^-\times (0,1)\times \{\psi_0\}\subseteq \partial^-\mathcal{M}_p$.
Assume the $\psi$-latitude of $a^+$ is different from zero---by its very definition this is always the case when $X_0\in \mathcal{S}_g^{0,\pm}$.
Therefore, whatever the perturbed holonomy $H_s$ along $\ell$ the inverse image $D'_1(s) $ of $\Sigma^+$ by $H_s$
is transverse to $\mathbb{A}_{\psi_0}$ for every $s$ close to 0. Call $b(s)$ the intersection point $D'_1(s)\cap \mathbb{A}_{\psi_0}$ when the intersection is non-empty; this is the case either when $s< 0$ or $s>0$ depending on whether the $\psi$-latitude $\cos_\psi(a^+)$ is \emph{negative} or \emph{positive}. By normalization, $b(s)$ belongs to the ray $\{(\phi_0,r,\psi_0)\mid r\geq 0\}$. Below, we use Notation \ref{not:C1(s)}.
\begin{defn}\label{elementary} The germ $\left(X_s\right)_{s}$ is said to be \emph{elementary} if it is normalized {\rm (}Definition \ref{normalization} {\rm )} and the following conditions are fulfilled. \begin{enumerate} \item The disc $D_1(s):= H_s(\Sigma^-)\cap\{z=1\}$ moves in the meridian disc $\{\psi=\psi_0\}$ while remaining parallel to the preferred hyperplane $\Delta^\phi$ {\rm (\ref{eq:HypPhi})}. \item Let $a^+(s)$ be the intersection of $D_1(s)$ with the pole axis. For every $s$, \begin{equation} \label{elem-1}
\partial_s a^+(s) =1.
\end{equation} \item For every $s$ close to 0 the velocity of $b(s)$ is \begin{equation} \label{elem-2} \partial_s b(s) =-\frac{1}{\eta\,\cos_\psi(a^+)}\ .
\end{equation}
Here, $\eta$ stands for the holonomic factor of $X_0$ {\rm (Definition \ref{def:HolFactor})}. \end{enumerate} \end{defn}
This definition makes sense only when the $\psi $-latitude of $a^+(X_0)$ is not $0$, that is, when $X_0$ does not lie on the $\phi$-axis $\mathcal{S}_g^\phi$ \eqref{axis}. This
is always the case when $X_0$ belongs to $\mathcal{S}_g^0\smallsetminus \mathcal{S}_g^{0,0}$.
\begin{lemme}\label{lemme_elementary} Let $X_{0}\in \mathcal{S}_g\smallsetminus \mathcal{S}_g^\phi$ be an $\alpha$-gradient in normal
form. Then there exists a germ of elementary path $\left(X_s\right)$
passing through $X_0$ and depending smoothly on $s$ in the $C^1$-topology.
\end{lemme}
\nd {\bf Proof.\ } Recall the tube $T$ with coordinates $(x,y,v,z)$ around the restricted homoclinic orbit $\underline{\ell}$ of $X_0$.
The holonomy $H_0$ is defined on a neighborhood of $\{z=0\}$ in $\partial^-\mathcal{M}_p$
and valued in a neighborhood of $\{z=1\}$ in $\partial^+\mathcal{M}_p$.
Since we are looking for a one-parameter perturbation $(X_s)$ of $X_0$ whose properties are readable in $T$, it is sufficient to describe it near $T$.
For $|s|$ small enough, the perturbed holonomy always reads
$H_s=H_0\circ K_s$ where $K_s$
is a diffeomorphism of $\partial^-\mathcal{M}_p$ supported in its interior with $K_0= Id$.
In order to satisfy conditions (\ref{elem-1}) and (\ref{elem-2}) of Definition \ref{elementary}, we first choose $a^+(s)$
and $D_1(s)$ before choosing $K_s$. For $a^+(s)$ we take the point in $\{\psi=\psi_0\}$ moving in the oriented pole axis with velocity $+1$
such that $a^+(0)=a^+$. For $D_1(s)$ we take the disc parallel to $D_1(0)$ passing through $a^+(s)$.
Take $K_s$, smooth with respect to $s$, such that:
\begin{equation}\label{K_s}
\left\{
\begin{array}{l}
K_s(a^-)= H_0^{-1}(a^+(s))\\
K_s(\Sigma^-)
=H_0^{-1}(D_1(s)) \text{ in } \mathcal{O} p\{z=0\}
\end{array}
\right.
\end{equation}
Thus, the first two items are fulfilled. Note that by normalization of $X_0$ the point $K_s(a^-)$
runs on a prescribed curve in the meridian $\{\phi=\phi_0\}$, namely the curve $H_0^{-1}(\mathbb{R}\nu_\phi)$. Its velocity
at time $s=0$ is the vector $\partial_v^0$. By (\ref{eq:HolonDec}) we have
\begin{equation}
\langle \partial_v^0, \nu_\psi\rangle= \frac 1\eta\,.
\end{equation}
We are now dealing with the last item.
We impose $D'(s)= K_s^{-1}(D'(0))$ to move in the meridian $\{\phi=\phi_0\}$; this is possible as the point
$\kappa_s:= K_s^{-1}(a^-)$ already moves in this meridian by normalization. There are two more constraints:
the first one is $\partial_s\kappa_s\vert_{s=0}= -\partial_v^0$ by Lemma \ref{velocity}; the second one is (\ref{elem-2}).
These two constraints are compatible since $\langle \partial_s b(s), \nu_\psi\rangle= -\frac 1\eta$.
For having a one-parameter family $(K_s)$ of diffeomorphisms of
$\partial^-\mathcal{M}_p$ converging $C^1$ to identity when $s$ goes to 0,
one has to choose $\partial_sK_s$ conveniently at time $s=0$. But this is easy to achieve since the velocity distribution
is given along transverse submanifolds in the extended space $\left(\partial^-\mathcal{M}_p\right)\times \mathcal{O} p(0)$.
$\Box$\\
\begin{remarque} Note the great difference between the normalization process of a crossing path and the building
of an elementary crossing path. The first one is achieved by an ambient $C^1$-conjugation; so,
it does not change the dynamics. The second one is a genuine bifurcation.
\end{remarque}
Clearly, Lemma \ref{lemme_elementary} holds with parameters, for instance when the data is a one-parameter family
in $\mathcal{S}_g\smallsetminus \mathcal{S}_g^\phi$.
Then, the next corollary follows.
\begin{cor}\label{cor_elementary} Let $X_{0,0}\in \mathcal{S}_g^0\smallsetminus\mathcal{S}_g^{0,0}$ and let $\gamma(t)=\left(X_{0,t}\right)_t$ be a germ
of path in $\mathcal{S}_g$ passing through $X_{0,0}$ and crossing $\mathcal{S}^0_g$ transversely, such that
$\frac{\partial\gamma}{\partial t}(0)$ points towards $\mathcal{S}_g^+$. Then, there exists a
two-parameter family $\mathcal{D}=\left(X_{s,t}\right)$
of pseudo-gradients of $\alpha$ adapted to $(\mathcal{S}_g,\mathcal{S}^0_g)$ such that,
for every $t$ close to $0$, the path $s\mapsto X_{s,t}$ is elementary.
Moreover, there are such $X_{s,t}$
which are smooth with respect to the parameters
in the $C^1$-topology.\\
\end{cor}
\begin{defn}\label{two-elementary} Let $\mathcal{D}$ be a $2$-disc in $\mathcal{F}_\alpha$ transverse to $\mathcal{S}_g^0\smallsetminus \mathcal{S}_g^{0,0}$ and adapted to the pair $(\mathcal{S}_g,\mathcal{S}_g^0)$. We say that $\mathcal{D}$ is \emph{elementary} if it is made of a one-parameter family of elementary crossing paths as in {\rm Corollary \ref{cor_elementary}}. \end{defn}
\noindent{\bf Proof of Theorem \ref{thm:DoublingRefined}.}
First, we prove the theorem in the particular case where the transverse disc $\mathcal{D}$ is elementary. Even in this particular case the proof is slightly different depending on where the base point $X_{0,0}$ lies either in $\mathcal{S}_g^{0,-}$ or in $\mathcal{S}_g^{0,+}$. In each case, the proof has three items: \begin{enumerate} \item[\bf 1.] What is the trace of $\mathcal{S}_{g^2}$ on $\mathcal{D}$? Is there a non-empty trace of $\mathcal{S}_{g^k}$ for $k\neq 1\text{ or } 2$? \item[\bf 2.] Is $\mathcal{D}$ transverse to $\mathcal{S}_{g^2}$? How is the positive co-orientation of $\mathcal{S}_{g^2}$? \item[\bf 3.] Which part $\mathcal{S}_{g^2}^+$ or $\mathcal{S}_{g^2}^-$ is intersected by $\mathcal{D}$?\\ \end{enumerate}
\noindent{\bf Case $X_{0,0}\in \mathcal{S}_g^{0,-}$.} In other words, $a^-(X_{0,0})$ has a negative $\phi$-latitude.
\noindent {\bf 1.} The pseudo-gradient $X_{0,t}$ has a homoclinic orbit $\ell_t$ based at $p$ and the $\phi$-latitude of
$a^-(X_{0,t})$ lies in $[-1,0)$ for every $t\in \mathcal{O} p(0)$. Denote by $\phi_t$ the spherical coordinate of
$a^-(X_{0,t})$. We use the tube $T$ around $\ell_0$ and its extremities: $\{z=0\}\subset \partial^-\mathcal{M}_p$
and $\{z=1\}\subset \partial^+\mathcal{M}_p$.
For simplicity, we specify even more the path $\left(X_{0,t}\right)_t$
by adding some assumptions (the discussion is similar with the other cases of latitudes by using
other specifications\,\footnote{ If $\omega_\phi(X_{0,0})=-1$, one makes $\frac{\partial\eta}{\partial t}(X_{0,t})>0$. Since $\omega_\psi(X_{0,0})$ must be positive, $\frac{\partial \chi}{\partial t}(X_{0,t})>0$.}): \begin{enumerate} \item[(i)] The $\phi$-equator of $X_{0,t}$ is fixed and the $\phi$-latitude $\cos_\phi(a^-(X_{0,0}))$ is not equal to $-1$.
\item[(ii)] The point $a^+(X_{0,t})= \Sigma^+\cap\ell_t$ and the $\psi$-equator of $X_{0,t}$
are fixed. \item[(iii)] The holonomic factor $\eta(X_{0,t})$ remains
constant and is denoted by $\eta$. \end{enumerate} Note
that (i) allows one to take $(X_{0,t})_t$ positively transverse to $\mathcal{S}_g^0$ while satisfying (ii) and (iii). More precisely, one makes the $\phi$-coordinate $\phi_t$ of $a^-(X_{0,t})$ vary on $t$ by increasing the $\phi$-latitude.
Denote the spherical coordinate of $a^+(X_{0,t})$ by $\psi_0$, independent of $t$. In this setting, as the paths $s\mapsto X_{s,t}$ are elementary the discs $D_1(s,t)\subset \{z=1\}$ depend only on $s$ and are denoted by $D_1(s)$. For every $s\neq 0$, their images by the descent map are discs $C_1(s)$ contained in the {\it spherical annulus} $\mathbb{A}_{\psi_0}:= \{(\phi,r,\psi_0)\mid \phi\in \Sigma^-, r\in [0,1]\}$. When $s$ goes to $0_-$, by Lemma \ref{lem:C1(s)} the discs $C_1(s)$ accumulate to the negative hemisphere $\mathcal{H}^-(\Sigma^-)$.
Since $s\mapsto X_{s,t}$ is elementary and $\cos_\psi(\psi_0)>0$, the disc $D'_1(s,t)$, preimage in $\{z=0\}$ of $\Sigma^+$ by the respective perturbed holonomy, intersects $\mathbb{A}_{\psi_0}$ in one point $b(s,t)$ when $s\leq 0$ and nowhere when $s>0$, according to Definition \ref{elementary}. When $t$ is fixed, $b(s,t)$ moves on the ray
$\{(\phi_t, r,\psi_0)\mid r\geq 0\}$ and its velocity is given by the formula in Definition \ref{elementary}. According to Remark~\ref{rem:g2}, we have: \begin{equation}\label{equationSg2} X_{s,t}\in \mathcal{S}_{g^2}\quad \text{if and only if}\quad b(s,t)\in C_1(s). \end{equation}
Denote by $c_1(s,t)$ the intersection point of $C_1(s)$ with the meridian disc $\{\phi= \phi_t\} $. When $t$ is fixed, $c_1(s,t)$ also moves on the ray $\{(\phi_t, r,\psi_0) \mid r\geq 0\}$ and its radial velocity is the same as the one of its lift through $Desc$ in $D_1(s)\subset \partial M_p^+$. Therefore, \begin{equation} \partial_s c_1(s,t)= \frac {1}{\cos_\phi(\phi_t)} \end{equation} As $X_{0,0}$ belongs to $\mathcal{S}_g^0$, that is $\chi(X_{0,0})=0$, the curves $b(s,0)$ and $c_1(s,0)$, defined for $s<0$, have the same radial velocities. Since both tend to $a^-(X_{0,0})$ on the same ray when $s$ goes to $0_-$, we have $b(s,0)=c_1(s,0)$ for every $s$. Then, (\ref{equationSg2}) tells us that $X_{s,0}\in\mathcal{S}_{g^2}$ for every $s$ close to $0$ negatively.
For $t\neq 0$ and $s<0$, the radial velocities of $c_1(s,t)$ and $b(s,t)$ are distinct while their limits when $s$ goes to $0$ coincide. Therefore, (\ref{equationSg2}) tells us that $X_{s,t}$ never lies in $\mathcal{S}_{g^2}$ for $s<0$.
When $s>0$, the discs $C_1(s) $ accumulate to the positive hemisphere $\mathcal{H}^+(\Sigma^-)$. There is no chance
for $C_1(s)$ to intersect $D'_1(s,t)$ which is far from any point in $\mathcal{H}^+(\Sigma^-)$.
What about $\mathcal{S}_{g^k}$? If $k\leq 0$, we have $u(g^k)\geq 0$ and there is no homoclinic orbit in the homotopy class $g^k$. When $k>2$, we have to discuss the successive passages of the unstable manifold $W^u(p,X_{s,t})$ in $\partial^-\mathcal{M}_p$, more precisely in $\{z=0\}$.
By Lemma \ref{lem:Passages}, if $t>0$, that is $\chi(X_{0,t})>0$, and $s<0$ only the discs
$C_2(s,t)$ of the second passage
are non-empty, but they accumulate to the positive hemisphere $\mathcal{H}^+(\Sigma^-)$.
Therefore, no further passage could give rise to a homoclinic orbit. When $s>0$, even the second passage does not exist.
If $t<0$, one is able to see that there are infinitely many passages in $\{z=0\}$. But, by velocity considerations
$C_k(s,t)$ never meet $D'_1(s,t)$. We do not give more details here because this is similar
to the symmetric case $X_{0,0}\in \mathcal{S}_g^{0,+}$ and $t>0$ where the analysis of velocities
will be completely achieved. Thus, the first item of case $X_{0,0}\in \mathcal{S}_g^{0,-}$ is proved.\\
\noindent{\bf 2.} The reason for transversality to $\mathcal{S}_{g^2}$ relies again on some computations of velocity.
Define for $s\leq 0$:
$$
\delta(s,t):= v\left(c_1(s,t)\right)-v\left(b_1(s,t) \right) \quad \text{and} \quad V(t):=\frac{\partial\delta(s,t)}{\partial s} \vert_{s=0}\,. $$ Although points are not the same, this velocity $V(t)$ at $s=0$ is easily checked to be
the same as the velocity computed in Lemma \ref{geometry-character}. Then, for every $t$ close to $0$
we have:
\begin{equation}\label{V}
V(t)= \eta\,\frac{\cos_{\psi}(\psi_0)}{\cos_\phi(\phi_t)} +1\quad \text{which implies} \quad \frac{dV(t)}{dt}<0\,.
\end{equation}
By definition of the character function, we have $V(0)=0$ which implies $V(t)<0$ for $t>0$.
Define $V(s,t):= \partial_s\delta (s,t)$. By construction of $(X_{s,t})$, we have $V(s,0)=0$ for every $s<0$ close to 0.
By (\ref{V}), the second partial derivative
$\partial^2_{t s}\delta(s,t)$ is negative for every $(s,t)$ close to $(0,0)$ with $s\leq 0$ (here we use the
smoothness with respect to the parameters\footnote{The vector fields in a normalized crossing path are
not smooth with respect to the space variable. Their holonomy is $C^1$ only. Nevertheless, as the
$C^1$-maps (of degree zero) $\partial^-\mathcal{M}_p\to \partial^+\mathcal{M}_p$ form a Banach manifold it makes sense to
consider a smooth family of such holonomies.}).
By integrating
in the variable $s$ from $s_0<0$ to $0$ and noticing that $\delta(0,t)=0$, we get: \begin{equation}\label{pt}
\frac{\partial \delta}{\partial t} (s_0,t)= -\frac{\partial}{\partial t} \left(\int_{s_0}^{0}\frac{\partial \delta}{\partial s}(s,t)\, ds\right)
= -\left(\int_{s_0}^{0} \partial_{ts}^2\delta(s,t)\, ds\right) >0.
\end{equation}
For $t=0$, this is exactly the transversality of $\mathcal{D}$ to $\mathcal{S}_{g^2}$ at $X_{s_0,0}$. \\
We are now looking at orientation. Take
$s_0<0$ such that $b(s_0,0)$
lies in $\{z=0\}$. It belongs to a homoclinic orbit $\ell'$ in the homotopy class $g^2$. There is a tube
$T'$ around $\ell'$ with coordinates $(x',y', v', z') $. The $y'$-axis is contained in
$D'_1(s_0,0)$ and is given a co-orientation which follows from the co-orientation
of $D'_1(0,0)$ by continuity.
The $x'$-axis is contained in $C_1(s_0)$. Its projection to the $x$-axis is orientation reversing
(Lemma~\ref{lem:C1(s)}). Therefore:
\begin{equation} \label{w}
v(\partial_{v'}) <0.
\end{equation}
By (\ref{pt}) we have: ${\displaystyle \frac{\partial}{\partial t}[v\left(c_1(s_0,t)\right)-v\left(b_1(s_0,t) \right)]_{\vert_{t=0}} = \frac{\partial \delta}{\partial t} (s_0,0)>0}$.
By replacing $v$ with $v'$ in the last inequality, we get:
$$ \frac{\partial}{\partial t}[v'\left(c_1(s,t)\right)-v'\left(b_1(s,t) \right)]_{\vert_{t=0}}<0\,.$$
This translates the fact that $\partial _t$ points to the negative side of $\mathcal{S}_{g^2}$ for $s<0$
while for $s=0$, $\partial _t$ defines the positive co-orientation of $\mathcal{S}_g^0$ in $\mathcal{S}_g$.\\
\noindent{\bf 3.} Let $L>0$. Consider a small circle $\gamma\subset \mathcal{D}$ centered at the origin of the coordinates $(s,t)$ and turning positively with respect to the orientation given by these coordinates. If the radius of $\gamma$ is small enough\footnote{ The more $L$ is large, the more this radius has to be small.}
and if $X_{0,0}$ is generic, $\gamma$ avoids all codimension-one strata in $\mathcal F_\alpha$ except: \begin{itemize} \item $\mathcal{S}_g$ which is crossed once in $\mathcal{S}_g^-$ positively, and once in $\mathcal{S}_g^+$ negatively,
\item $\mathcal{S}_{g^2}$ which is crossed once positively according to the above discussion. \end{itemize} As noted in Remark \ref{virtual} each crossed signed stratum is endowed with a \emph{bifurcation factor}. The product of these factors should be equal to 1 up to $L$ in the Novikov ring after traversing $\gamma$ once.
The bifurcation factor of a small sub-arc of $\gamma$ crossing $\mathcal{S}_{g^2}$ is still unknown; call it $m(g)$. This (commutative) product is $$m(g)\cdot(1+g+g^2+\cdots)^{-1}\cdot (1+g)=1\quad{\rm(mod.}\ L). $$ Then, $m(g)=(1-g^2)^{-1}=1+g^2+g^4+\cdots \ ({\rm mod.}\ L)$, which is the bifurcation factor attached to the positive side of $\mathcal{S}_{g^2}$; this can only happen if the crossing of $\mathcal{S}_{g^2}$ takes place in $\mathcal{S}_{g^2}^+$. The proof of Theorem \ref{thm:DoublingRefined} is complete for an elementary 2-disc in the case $X_{0,0}\in \mathcal{S}_g^{0,-}$.\\
\noindent{\bf Case $X_{0,0}\in \mathcal{S}_g^{0,+}$.} In other words, $a^-(X_{0,0})$ has a positive $\phi$-latitude.
\noindent{\bf 1.} The discussion is led in the same manner as in the previous case with same notation. We only mention the main differences. Here, $a^-(X_{0,0})$ belongs to the positive hemisphere of $\Sigma^-$ while $\psi_0$ belongs to the negative hemisphere of $\Sigma^+$. The discs $C_1(s)$ intersect $\{z=0\}$ only when $s>0$. Therefore, for $s<0$ there is no chance for meeting $\mathcal{S}_{g^k}$, for any $k\neq 0$.
By Lemma \ref{lem:Passages} there are infinitely many passages $C_k(s), \ k\geq 1, s>0$ of $W^u(p, X_{s,t})$ in $\partial^-\mathcal{M}_p$ meeting $\{z=0\}$. Recall that the $(i-1)$-discs $C_k(s)$ do not depend on $t$. The fact that $X_{s,t}$ belongs to $\mathcal{S}_{g^2}$ if and only if $s>0$ and $ t = 0$ is proved exactly as in the previous case.
Then, we are left to show that for every $k>2$, $\mathcal{S}_{g^k}$ does not intersect $\mathcal{D}$. Here it is important to think of $\mathcal{D}$ as a germ because for a given representative this result is not true; when $k$ increases, the domain of the representative has to be restricted. Let $C_k(s,t)$ denote the $(i-1)$-disc in $\partial^-\mathcal{M}_p$ corresponding to the $k$-th passage of the unstable manifold $W^u(p, X_{s,t})$ (see Lemma \ref{lem:Passages}); let $D'_1(s,t)$ denote the $(n-i-1)$-disc
corresponding to the first passage of the stable manifold $W^s(p, X_{s,t})$. Observe that $\mathcal{D}$ intersects $\mathcal{S}_{g^{k+1}}$ if and only if, for $(s,t)$ close to $(0,0)$, $C_k(s,t)$ intersects $D'_1(s,t)$. This translates
into the following equation:
\begin{equation}
c_k(s,t)= d'_k(s,t)
\end{equation} where $c_k(s,t)$ and $d'_k(s,t)$ are the only two points of $\{z=0\}$ lying respectively on $C_k(s,t)$ and $D'_1(s,t)$ which have the same $(x,y)$-coordinates. Then, the above equation becomes:
\begin{equation}\label{equality}
v\bigl( c_k(s,t)\bigr)= v\bigl(d'_k(s,t)\bigr)\,.
\end{equation} When $s$ goes to $0$, these two points go to the same point
$a^-(X_{s,t})\in\Sigma^-$. By computations done in the proof of Lemma \ref{lem:Passages},
we know that: \begin{equation} \frac{\partial}{\partial s}\left[v\bigl( c_k(s,t)\bigr)- v\bigl(d'_k(s,t)\bigr)\right]_{\vert_{s=0}}\neq 0\,.
\end{equation} It follows that, for $s$ close to $0$ (closeness depending on $t$), the equation (\ref{equality}) cannot be fulfilled.
The answer to questions {\bf 2} and {\bf 3} are exactly as in the case $X_{0,0}\in \mathcal{S}_g^{0,-}$. Then, Theorem \ref{thm:DoublingRefined} is proved for elementary 2-discs as in Definition \ref{two-elementary}.\\
To finish the proof of Theorem \ref{thm:DoublingRefined} it is convenient to use some $C^1$-topology. More precisely, we choose a system of finitely many closed flow boxes $(B_j)_{j\in J}$ of $X_{0,0}$ whose end faces $\partial_\pm B_j$ are tangent to $\ker\alpha$
and whose union covers $M$ except for a small open neighborhood $N$ of the zero set of $\alpha$.
It is assumed that when slightly shrinking every $B_j$ to $B'_j$ tangentially to $\ker\alpha$, the union $\cup_j B'_j$ still covers $M\smallsetminus N$. Fix a closed $C^0$-neighborhood $U$ of $X_{0,0}$ among the \emph{uniquely integrable} vector fields whose \emph{transverse holonomy is well defined} for every $j\in J$ from $\partial_-B'_j$ to $ \partial_+B_j$ and of class $C^1$. This $U$ endowed with the $C^0$-topology of vector fields and the $C^1$-topology of holonomies $B'_j\to B_j,\, j\in J,$ may be thought of as a closed ball in a Banach manifold.
By Proposition \ref{conjugation}, there exists a neighborhood $V$ of $X_{0,0}$ in $\mathcal{S}_g^0$ such that the following properties are fulfilled for every $Y\in V$:
\begin{itemize} \item[--] there exists a $C^1$-diffeomorphism $\Upsilon_Y$ of $M$ which carries $Y$ to a vector field in normal form, that is, $\left(\Upsilon_Y\right)_*Y$ is normalized; \item[--] $\Upsilon_Y$ preserves the strata $\mathcal{S}_g$, $\mathcal{S}_g^0$ and $\mathcal{S}_g^{0,0}$ (Proposition \ref{invariance}). \end{itemize} It is easy to make $\Upsilon_Y$ depend continuously on $Y$ in the $C^1$-topology with the property that $\Upsilon_Y= Id$ when $Y$ is already in normal form.
Let $\mathcal{D}$ be an elementary 2-disc centered at $X_{0,0}$. If $Y\in V$ is close enough to $X_{0,0}$ and normalized, one finds an elementary 2-disc $\mathcal{D}_Y$ centered at $Y$, depending $C^1$ on $Y$ and equal to $\mathcal{D}$ if $Y= X_{0,0}$. If $Y\in V$ is not normalized we still have a 2-disc centered at $Y$, namely \begin{equation} \mathcal{D}_Y:=\left(\Upsilon_Y\right)_*^{-1}\mathcal{D}_{\left(\Upsilon_Y\right)_*Y}. \end{equation} This $\mathcal{D}_Y$ is not elementary in the strict sense but it is conjugate to an elementary 2-disc in $U$. As $\Upsilon_Y$ preserves the stratification, in particular $\mathcal{S}_{g^2}$, the intersection of $\mathcal{D}_Y$ with the different strata under consideration is the same as in the elementary case. Finally, we have a $C^1$-map $$ F: V \times [-1,+1]^2\to U, \quad (Y,s,t)\mapsto F(Y,s,t) $$ which meets $\mathcal{S}_{g^2}$ if and only if $t=0$ and $s\in \mathbb{R}_\pm $ depending upon $X_{0,0}\in \mathcal{S}_g^{0,\pm}$. Moreover, the germ of $F$ at $(s,t)=(0,0)$ avoids all $\mathcal{S}_{g^k}$ for $k\neq 1,2$.
One checks that $span\{ \partial_sF,\partial_tF\}$ at $(s,t)=(0,0)$ is transverse to $\mathcal{S}_g^0$ in $U$. The Inverse Function Theorem is available and states that for $V$ small enough
$F$ is a $C^1$-diffeomorphism onto its image $\mathcal N$,
an open set in $U$. Therefore $\mathcal N$ has a product structure and a projection $P: \mathcal N\to [-1,+1]^2$ such that, for every $X\in \mathcal N$, the following equivalences hold: \begin{equation}\label{equiv} \left\{ \begin{array}{lcl}
X \in \mathcal{S}_g & \Longleftrightarrow& \bparent{s\circ P}(X)=0\\
X \in \mathcal{S}_g ^0 &\Longleftrightarrow & P(X)=(0,0)\\
X \in \mathcal{S}_{g^2}& \Longleftrightarrow & \bparent{t\circ P}(X)=0 \text{ and }
\bparent{s\circ P}(X)\in \mathbb{R}_\pm
\text{ depending on }X_{0,0}\in \mathcal{S}_g^{0,\pm}.
\end{array} \right. \end{equation}
Let $\mathcal{D}'$ be any germ of a two-parameter family centered at $X_{0,0}$, transverse to $\mathcal{S}_g^0\smallsetminus\mathcal{S}_g^{0,0}$ and contained in $\mathcal N$. Its projection $P\circ \mathcal{D}'$ is submersive. The equivalences (\ref{equiv}) finish the proof of Theorem
\ref{thm:DoublingRefined}.
$\Box$\\
\appendix \label{appendix}
\section{\\ Proof of the key fact \ref{deep}} Let us recall the statement in question.
\begin{prop}\label{deep-appendix} Let $X$ be an $\alpha$-gradient which is assumed Kupka-Smale.
Let $p$ and $q$ be two zeroes of $\alpha$ of respective Morse indices $k$ and $k-1$. Then, for every $L>0$ the number of connecting orbits from $p$ to $q$ whose $\alpha$-length is bounded by $L$ is finite. \end{prop}
\nd {\bf Proof.\ } The proof mainly consists of comparing the $\alpha$-length of any $X$-trajectory $\gamma$ drawn on the unstable manifold $W^u(p, X)$ to the distance of its end points after lifting $\gamma$ to the universal cover of $M$. By definition, the $\alpha$-length $\mathcal L(\gamma)$ is additive with respect to any finite subdivision of the considered trajectory $\gamma$. \\
\noindent {\sc First part.} For a trajectory $\gamma$ descending from the top of a Morse model $\mathcal{M}(z)$ about any zero $z\in Z(\alpha)$ to the bottom of $\mathcal{M}(z)$ without getting out of it, $\mathcal L(\gamma)$ is equal to the oscillation of any local primitive of $\alpha$ on $\mathcal{M}(z)$. Hence, it does not depend on $\gamma$. Without loss of generality, we may assume that this oscillation is the same for every zero of $\alpha$; it is denoted $h$. Therefore, given a trajectory $\gamma$ of $\alpha$-length bounded by $L$, the number $\kappa$ of segments traced on $\gamma$
by the compact union $\mathcal{M}:= \cup_{z\in Z(\alpha)}\mathcal{M}(z)$ fulfills
\begin{equation}\label{number}
\kappa\leq \frac Lh\, .
\end{equation}
In the complement of $\mathcal{M}$, that is away from the zeroes of $\alpha$, there is some positive constant $C$ such that
we have
$\vert X(x)\vert \geq C$
for every $x\in \mathcal{M}^*:= M\smallsetminus \mathcal{M}$.
Denote by $\lambda(\gamma): = \int_\gamma\vert\dot\gamma\vert$ the length of a path $\gamma$ and set
$\gamma^*:= \gamma\cap \mathcal{M}^*$. Then, for every $X$-trajectory $\gamma$ whose $\alpha$-length is bounded by $L$, we deduce
\begin{equation}\label{length*}
C\lambda(\gamma^*)= C\int_{\gamma^*}\vert X(\gamma(t))\vert\, dt \leq \int_{\gamma^*}\vert X(\gamma(t))\vert^2\, dt=\left\vert\int_{\gamma^*}\alpha\right\vert\leq L\,.
\end{equation}
Here the variable $t$ is the time of the flow of $X$.
Denote by $W^u_L(p,X)$ the union of the $X$-trajectories descending from $p$ whose $\alpha$-length is less than $L$. This is an open domain in the unstable manifold $W^u(p,X)$; it is homeomorphic to an open ball of dimension $k$. Let $\tilde M\mathop{\to}\limits^{\pi}M$
be the universal cover of $M$ and let $\tilde p\in \pi^{-1}(p)$. Let $\tilde X$ be the lift of $X$ to $\tilde M$.
It is a hyperbolic vector field and the unstable manifold $W^u(\tilde p, \tilde X)$ is the lift of $W^u(p,X)$
through $\tilde p$. Moreover, its truncation $W^u_L(\tilde p, \tilde X)$ is the lift of $W^u_L(p, X)$.
Let $\ell$ be an $X$-trajectory descending from $p$ in $W^u_L(p, X)$, let $e$ be its end point. Take its lift $\tilde\ell$
from $\tilde p$ and denote by $\tilde e$
its end point. One looks at the subdivision $S$ of $\tilde\ell$ marked by its crossings with $\pi^{-1}(\mathcal{M})$.
From (\ref{number}), (\ref{length*}) and the triangle inequality applied to the vertices of $S$, one deduces that the lifted distance satisfies \begin{equation} \label{radius} d(\tilde p, \tilde e)\leq \frac LC +\frac Lh\delta=:R(L) \end{equation} where $\delta$ stands for the maximal diameter of $\mathcal{M}(z), \, z\in Z(\alpha)$. Therefore, we have an inclusion \begin{equation}
W_L^u(\tilde p,\tilde X)\subset B(\tilde p, R(L)) \end{equation} where $B(\tilde p, R)$ stands for the ball of $\tilde M $ about $\tilde p$ of radius $R$ and where $R(L)$ is the right-hand side of (\ref{radius}). The consequence of these elementary estimates is that the closure $cl_L(\tilde p)$ of $W^u_L(\tilde p,\tilde X)$ is compact. In particular, it contains finitely many lifts of the zero $q$ that we are interested in. Indeed these lifts cannot accumulate, as their mutual distance is bounded from below.\\
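For the reader's convenience, the chain of inequalities behind (\ref{radius}) may be spelled out as follows: the triangle inequality applied to the vertices of $S$ gives
$$ d(\tilde p, \tilde e)\ \leq\ \lambda\bigl(\tilde\ell\cap\pi^{-1}(\mathcal{M}^*)\bigr)\ +\ \kappa\,\delta\ \leq\ \frac LC+\frac Lh\,\delta\,, $$
since each sub-arc of $\tilde\ell$ inside $\pi^{-1}(\mathcal{M})$ joins two points at distance at most $\delta$, while the total length of the remaining sub-arcs is bounded by $L/C$ according to (\ref{length*}), lengths being preserved by the lifted metric; the bound on $\kappa$ is (\ref{number}).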
\noindent {\sc Second part.} The end of the proof uses the $KS$ assumption. Let $\tilde f$ be a global primitive of $\pi^*\alpha$; it exists since $\tilde M$ is simply connected. The descending
gradient of $\tilde f$ with respect to the lifted metric is equal to $\tilde X$. The truncation of $W^u(\tilde p, \tilde X)$ to the upper level set $\{\tilde f>\tilde f(\tilde p)-L\}$ is exactly the truncation $W^u_L(\tilde p,\tilde X)$. The end point of a gradient line of $\alpha$-length $L$ belongs to the level set $\{\tilde f=\tilde f(\tilde p)-L\}$.
Let us enumerate as $\tilde q_1, \ldots, \tilde q_m$ the lifts of $q$ which belong to $cl_L(\tilde p)$.
Now, we can argue as in usual Morse theory. For $1\leq j\leq m$, consider the Morse model $\mathcal{M}(\tilde q_j)$ and the so-called co-sphere $\Sigma_j$, a sphere of dimension $(n-k)$ in the top of $\mathcal{M}(\tilde q_j)$. By the $KS$ assumption, the two following properties hold for every $j=1, \ldots, m$: \begin{enumerate} \item the singular part (that is, the frontier) of $cl_L(\tilde p)$ avoids $\Sigma_j$; \item the regular part, that is $ W^u_L(\tilde p, \tilde X)$, is transverse to $\Sigma_j$. \end{enumerate} It is classical that the compactness of $cl_L(\tilde p)$, combined with these two properties, implies the finiteness of $\Sigma_j\cap W^u_L(\tilde p, \tilde X)$. Therefore, there are finitely many orbits of $\tilde X$ descending from $\tilde p$ and ending at $\tilde q_j$ for every $j$. This is the desired finiteness.
$\Box$\\
\vskip 1cm
\end{document}
|
arXiv
|
Comparison of different calibration techniques of laser induced breakdown spectroscopy in bakery products: on NaCl measurement
Gonca Bilge1,
Kemal Efe Eseller ORCID: orcid.org/0000-0002-9758-48522,
Halil Berberoglu3,
Banu Sezer4,
Ugur Tamer5 &
Ismail Hakki Boyaci4
Laser induced breakdown spectroscopy (LIBS) is a rapid optical spectroscopy technique for elemental determination, which has been used for quantitative analysis in many fields. However, calibration relating atomic emission intensity to sample concentration is still a challenge due to the physical-chemical matrix effects of samples and fluctuations of experimental parameters. To overcome these problems, various chemometric data analysis techniques have been combined with the LIBS technique. In this study, LIBS was used to show its potential as a routine analysis method for Na measurements in bakery products. A series of standard bread samples containing various concentrations of NaCl (0.025%–3.5%) was prepared to compare different calibration techniques. Standard calibration curve (SCC), artificial neural network (ANN) and partial least square (PLS) techniques were used as calibration strategies. Among them, PLS was found to be the most efficient for predicting the Na concentrations in bakery products, with an increase in the coefficient of determination from 0.961 to 0.999 for standard bread samples and from 0.788 to 0.943 for commercial products.
Laser induced breakdown spectroscopy (LIBS) is an atomic emission spectroscopy technique in which a laser beam excites and intensively heats the surface of a sample. The excited material is brought to a gaseous plasma state and dissociated into molecules and fine particles, which produces a characteristic plasma light. The intensity of this plasma light is related to the concentrations of the elements in the sample. LIBS has many advantages, as it allows rapid, real-time and in situ field analysis without the need for sample preparation [1,2,3,4,5,6,7,8,9,10]. Moreover, its application has expanded to fields such as metallurgy, mining, environmental analysis and pharmacology [11,12,13,14].
The intensity of the LIBS signal is influenced by various factors, including laser energy, detection time window, lens-to-sample distance, and the chemical and physical matrix [15]. The chemical matrix effect is the most important one, since the molecular and chemical composition of the sample is directly related to the chemical matrix, and it perturbs the LIBS plasma [16]. Minor elements in the sample structure can cause matrix effects and interferences on the major-element spectral lines. Furthermore, LIBS signal intensity is influenced by atmospheric composition, and the plasma products formed interact with the sample surface. Many approaches have been developed to overcome the matrix effect. Traditionally, spectral peak intensity or peak area from the LIBS data is plotted versus sample concentration for quantitative analyses, which is the standard calibration curve (SCC) method [17]. Chemometric techniques are being used more widely in order to enhance the analytical performance of LIBS [18]. Recent works have shown that multivariate analyses such as partial least square (PLS) and artificial neural network (ANN) give promising results for quantitative analysis [19,20,21,22]. These advanced techniques reduce the complexity of the spectra and extract valuable information. In LIBS analysis, many fluctuating experimental parameters weaken the relationship between elemental composition and LIBS intensity [23]. An ANN builds a mathematical model from input data and gives information about unknown samples, processing data in a manner analogous to a biological neural network; it simulates human intelligence for objective learning. ANN has been used for identification of polymers by LIBS [24], analysis of LIBS data for three chromium-doped soils [25], and quantification of elements in soils and rocks [26]. The other most commonly used chemometric method is PLS. It is a pattern recognition technique which can analyze a set of spectral lines instead of a single specific line intensity as in the standard calibration curve method. As a consequence, the combination of chemometric methods and the LIBS technique has given promising results for quantitative studies.
Na is an essential element in the human diet. However, if consumed excessively, it may cause health problems such as high blood pressure [27], strokes and coronary heart diseases [28]. Thus, sodium levels in food should be controlled. In a human diet, 70–75% of the total sodium chloride (NaCl) intake is obtained from processed foods, of which cereal and cereal products constitute approximately 30% [29]. Therefore, the NaCl content in bread, the most consumed food all over the world, should be lowered and should comply with the Codex Alimentarius. Na content can be controlled by using standard methods such as flame atomic absorption spectrometry (AAS), titration and potentiometry [30, 31]. These methods are time consuming and complex due to their sample preparation processes and their inconvenience for in situ and point-detection analyses. Therefore, new, rapid and practical techniques are required.
LIBS has been applied to several foods, such as milk, bakery products, tea, vegetable oils, water, cereals, flour, potatoes, palm dates and different types of meat [32]. Food supplements have been investigated to identify spectral signatures of minerals (Ca, Mg, C, P, Zn, Fe, Cu, and Cr) [33]. Quantitative analysis of NaCl in bakery products has been performed using a standard calibration curve [34, 35]. In the present study, we measured Na concentrations in bakery products by LIBS and conducted a direct comparison between the standard calibration curve, ANN and PLS in terms of prediction accuracy and prediction precision. The combination of the LIBS technique and the PLS model is a promising method for routine Na measurements in bakery products. In this paper, three calibration methods (SCC, ANN, PLS) are compared in bakery food applications for the first time.
LIBS experimental setup
LIBS spectra were recorded using a Quantel-Big Sky Nd:YAG laser (Bozeman, MT, USA), an HR2000 Ocean Optics spectrograph (Dunedin, FL, USA) and a Stanford Research Systems delay generator SRS DG535 (Cleveland, OH, USA). Figure 1 shows the experimental setup. The excitation source was a Q-switched Nd:YAG laser (Quantel, Centurion) operating at 532 nm with a maximum energy of 18 mJ per pulse and approximately 9 ns (FWHM) pulse duration. The laser repetition rate is adjustable in the range of 1–100 Hz, but the experiment was performed at 1 Hz. The beam diameter at the exit was 3 mm with 5 mrad divergence. A 50 mm focal length lens was used to focus the beam onto the pellet surface. The emitted plasma light was collected with a pickup lens 50 mm in diameter, aligned at approximately 90 degrees with respect to the laser beam, and then coupled to the fiber tip of the spectrometer. The distance between the pickup lens and the focal point of the laser beam was approximately 15 cm. In this work, the HR2000 (Ocean Optics) spectrometer was used as the detection system, with a resolution of approximately 0.5 nm in the 200–1100 nm range. The 588.6 nm Na line was detected by gating the spectrometer 0.5 μs after the laser pulse with a 20 μs gate width. All measurements were performed under ambient conditions and exposed to the atmosphere. Samples were measured by the LIBS technique in triplicate, scanning five different locations with four excitations per location.
Schematic presentation of LIBS experimental setup
Bread flour, bread additive and yeast were purchased from a local market. Nitric acid (HNO3) and NaCl were purchased from Sigma Aldrich (Steinheim, Germany). Standard bread samples were prepared in accordance with the American Association of Cereal Chemists (AACC) Optimized Straight-Dough Bread-Baking Method No. 10–10.03 [33]. Twelve standard bread samples were produced using this method at salt concentrations ranging between 0.025 and 3.5%. The bread dough, comprising 100 g flour, 0.2 g bread additive, 25 ml of 8% yeast solution, 25 ml salt solution at various concentrations and 10 ml water, was kneaded by hand for 15 min. Dough pieces were rounded and incubated for 30 min during the first fermentation. 30 min later, the dough was punched and incubated for another 30 min during the second fermentation. After that, the dough was formed, placed into tins for the final fermentation and incubated for 55 min at 30 °C. Subsequently, the bread was baked for 30 min at 210 °C, taken out of the oven and cooled. Following this process, bread samples were dried at 105 °C for 2 h and cooled in a desiccator to be used for the LIBS measurements. Then, 400 mg of dried, powdered bread sample was shaped into a pellet under 10 t of pressure using a pellet press machine.
Na detection in bakery products by atomic absorption spectroscopy
The Na content of the standard bread samples and commercial samples was analyzed by atomic absorption spectroscopy (the reference method for Na measurements). Samples were prepared for atomic absorption spectroscopy measurements based on EPA Method 3051A through microwave-assisted digestion [36]. First, 0.3 g of the dried sample and 10 ml of concentrated HNO3 were placed in a fluorocarbon polymer vessel. The vessel was then sealed, and the samples were extracted by heating in a laboratory microwave unit. After cooling, the vessel contents were filtered with Whatman No. 1 filter paper and diluted in 100 ml of deionized water. The atomic absorption spectra for Na were recorded with an ATI-UNICAM 939 AA spectrometer (Cambridge, UK) at 588.599 nm.
Data analyses were performed by SCC, PLS and ANN. Calibration and validation results were obtained and compared with each other. Performances of the models were evaluated according to coefficient of determination (R2), relative error of prediction (REP), and relative standard deviation (RSD). After that, LIBS spectra of commercial products were analyzed to examine the matrix effect. To compare the 3 methods, REP values were used to evaluate the prediction accuracy.
$$ REP\left(\%\right)=\frac{100}{N_v}{\sum}_{i=1}^{N_v}\left|\frac{{\hat{\mathrm{c}}}_i-{c}_i}{c_i}\right| $$
Nv = number of validation spectra.
ci = true concentration.
ĉi = predicted concentration.
In addition, we used RSD as a prediction precision indicator.
$$ RSD\left(\%\right)=\frac{100}{N_{conc}}{\sum}_{k=1}^{N_{conc}}\frac{\sigma_{c_k}}{c_k}\quad \mathrm{with}\quad {\sigma}_{c_k}^{2}={\sum}_{i=1}^{\rho }\frac{{\left({\hat{c}}_{ik}-{c}_k\right)}^2}{\rho -1} $$
Nconc = number of different concentrations in the validation set.
ρ = number of spectra per concentration.
σ = Standard deviation.
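For illustration only (this snippet is not part of the original study; the function and variable names are hypothetical), the two figures of merit defined above can be computed from arrays of true and predicted concentrations as follows:

import numpy as np

def rep_percent(c_true, c_pred):
    # Relative error of prediction (%) over all validation spectra
    c_true, c_pred = np.asarray(c_true, float), np.asarray(c_pred, float)
    return 100.0 / c_true.size * np.sum(np.abs((c_pred - c_true) / c_true))

def rsd_percent(c_true, c_pred, concentrations):
    # Relative standard deviation (%) averaged over the distinct concentrations
    c_true, c_pred = np.asarray(c_true, float), np.asarray(c_pred, float)
    total = 0.0
    for c_k in concentrations:
        preds = c_pred[c_true == c_k]          # the rho spectra recorded at this concentration
        sigma2 = np.sum((preds - c_k) ** 2) / (preds.size - 1)
        total += np.sqrt(sigma2) / c_k
    return 100.0 / len(concentrations) * total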
We first present the quantitative results of the LIBS data obtained with the standard calibration method, which is based on the measurement of the Na atomic emission at 588.599 nm in standard bread samples. In this method, instrumental noise was subtracted from the spectra. Then, background normalization was applied with respect to 575.522 nm, where there is no atomic emission spectral line. Scanning five different locations with four excitations per location, we analyzed the samples by the LIBS technique in triplicate; for each sample (pellet), 20 shots were accumulated. The calibration curve for the Na line at 588.599 nm was obtained by plotting its intensity (peak height) versus the Na concentration in each sample. Twenty-six data sets (each comprising 20 spectra) were used for calibration and 13 data sets for prediction with the SCC method. Following that, LIBS spectra of commercial products were analyzed via the SCC method, and the results were compared with the Na concentrations measured by AAS.
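A minimal sketch of this univariate SCC workflow is given below. It is purely illustrative, with hypothetical variable names; the exact background-correction scheme used by the authors (subtraction versus division at 575.522 nm) is not specified, so subtraction is assumed here.

import numpy as np

def na_peak_height(wavelength, intensity, na_line=588.599, bg_line=575.522):
    # Background-corrected Na peak height for one accumulated spectrum
    na_idx = np.argmin(np.abs(wavelength - na_line))
    bg_idx = np.argmin(np.abs(wavelength - bg_line))
    return intensity[na_idx] - intensity[bg_idx]

def fit_scc(peak_heights, na_concentrations):
    # Linear calibration: least-squares fit of peak height versus known concentration
    slope, intercept = np.polyfit(na_concentrations, peak_heights, deg=1)
    return slope, intercept

def predict_scc(peak_height, slope, intercept):
    # Invert the calibration line to predict an unknown Na concentration
    return (peak_height - intercept) / slope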
In this study, we used two different multivariate analysis methods. One of them is PLS, for which we used the same data set as in previous work [34]. LIBS spectral data between 538.424 nm and 800.881 nm were used instead of the whole spectrum, because the most quantitative information could be obtained from this region. The LIBS data matrix for PLS analysis was obtained by analyzing the spectra of 39 standard bread samples (26 samples for calibration, 13 samples for validation). Data analysis was performed using PLS with stand-alone chemometrics software (Version Solo 6.5 for Windows 7, Eigenvector Research Inc., Wenatchee, WA, USA). The data matrix of the selected LIBS data and concentrations was loaded into the software as calibration data, and the PLS algorithm was run with different numbers of components between 1 and 15. Mean centering was applied as pre-processing to the calibration input data. The prediction ability of the obtained model was determined with the validation data set. Selection of the number of latent variables, which balances cumulative variance against prediction ability, is very important. While cumulative variance increases with the number of latent variables (11 in this study), prediction ability does not keep increasing; for this reason, it is important to find an optimal trade-off between cumulative variance and prediction ability. In the PLS model, predictability was determined by calculating the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP) for the validation [37]. Minimum RMSEC and RMSEP values were selected for the PLS model. After that, Na concentrations in commercial products were analyzed by PLS, and the results were compared with the results of AAS.
$$ RMSEC=\sqrt{\frac{\sum_{i=1}^{M}{\left( actual_i- calculated_i\right)}^2}{M-1}} $$
$$ RMSEP=\sqrt{\frac{\sum_{i=1}^{N}{\left( actual_i- calculated_i\right)}^2}{N}} $$
M: number of samples used in calibration data set.
N: number of samples used in prediction data set.
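As an illustrative sketch only (scikit-learn is used here in place of the commercial chemometrics software named above, and the data are synthetic stand-ins, so names and shapes are hypothetical), a mean-centered PLS calibration with RMSEC/RMSEP evaluation could look like this:

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
# Stand-ins for 26 calibration and 13 validation spectra (200 wavelength channels)
X_cal, X_val = rng.normal(size=(26, 200)), rng.normal(size=(13, 200))
y_cal, y_val = rng.uniform(0.025, 3.5, 26), rng.uniform(0.025, 3.5, 13)

pls = PLSRegression(n_components=11, scale=False)  # 11 latent variables, mean centering only
pls.fit(X_cal, y_cal)

y_cal_hat = pls.predict(X_cal).ravel()
y_val_hat = pls.predict(X_val).ravel()

rmsec = np.sqrt(np.sum((y_cal - y_cal_hat) ** 2) / (len(y_cal) - 1))  # calibration error
rmsep = np.sqrt(np.mean((y_val - y_val_hat) ** 2))                    # prediction error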
The other multivariate method applied is ANN. The same experimental data were used for quantitative analysis with the Neural Network Toolbox, MATLAB® Release 14 (The MathWorks, Natick, MA). The independent variables are the LIBS spectra between 538.424 nm and 800.881 nm, and the dependent variable is the Na concentration. As for the PLS method, 26 data sets were used for calibration and 13 data sets for validation of the trained network. The network was trained for calibration using the toolbox training functions, with logsig and purelin transfer functions. Then, the number of nodes in the hidden layer was optimized between 1 and 10, and it was found that seven hidden nodes gave the best performance.
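The original work used the MATLAB Neural Network Toolbox; purely as an illustrative analogue (not the authors' implementation, and trained on synthetic stand-in data), a comparable single-hidden-layer network with seven logistic ("logsig"-like) hidden nodes and a linear output can be sketched in Python:

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_cal, X_val = rng.normal(size=(26, 200)), rng.normal(size=(13, 200))  # stand-in spectra
y_cal = rng.uniform(0.025, 3.5, 26)                                    # stand-in Na concentrations

# Seven hidden nodes with logistic activation; MLPRegressor's output layer is
# linear (identity), playing the role of "purelin" in the MATLAB toolbox.
ann = MLPRegressor(hidden_layer_sizes=(7,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
ann.fit(X_cal, y_cal)
y_val_hat = ann.predict(X_val)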
The coefficient of determination (R2) value was used for evaluating the prediction capability of the method and for choosing the network. The estimation ability of the ANN was determined by comparing actual and predicted values. After that, Na concentrations in commercial products were analyzed by ANN, and the results were compared with the results of AAS.
For the calibration study, a total of 780 spectra (20 spectra for each pellet, 3 pellets for each sample, 13 standard bread samples) and, for the commercial products, a total of 360 spectra (20 spectra for each pellet, 3 pellets for each sample) were recorded by LIBS. Figure 2 illustrates the LIBS spectra of standard bread samples containing different amounts of NaCl. The peak at 588.599 nm belongs to Na and the peak at 769.900 nm belongs to K according to the NIST atomic database [38]. Figure 2 shows that the intensity of the Na band at 588.599 nm increases as the NaCl level in the breads rises.
LIBS spectra of standard bread samples at various salt concentrations
Calibration models were developed according to three different methods, which are SCC, ANN and PLS for quantitative treatments attained from a calibrated data set (standard bread samples). Then, prediction ability of the regression of obtained model was evaluated via validation data set (standard bread samples, excluding the calibration data set, were treated as unknown) to test the accuracy and precision of calibration model. After that, Na concentrations of commercial products were predicted and compared with results of standard method AAS. This treatment is found to be useful for evaluating the matrix effect and the potential of LIBS in commercial samples.
The traditional way to obtain a calibration curve is to use reference samples which contain constant concentrations of the major elements and varying concentrations of the target element. For this purpose, standard bread samples were prepared at different salt concentrations and analyzed via LIBS. The standard calibration curve of Na was obtained by plotting its intensity at 588.599 nm versus the measured Na concentrations (Fig. 3a). Each point in the calibration curve represents the average value of 3 pellet samples, and each pellet spectrum is an accumulation of 20 laser shots. RSD and REP values and the other results for this calibration strategy are summarized in Table 1.
Calibration and validation plots developed with SCC (a), ANN (b), PLS (c) data analysis techniques
Table 1 Prediction of Na concentrations in standard bread samples with SCC, ANN and PLS
PLS calibration was performed using the same standard bread samples with known NaCl concentrations. The spectral interval of 538.424 nm to 800.881 nm was chosen because most of the atomic emission lines are in this region. To enhance the performance of the PLS method, mean centering was applied as pre-processing to the calibration input data. The resulting PLS calibration model and validation data set are presented in Fig. 3c. The model with a low RMSEC value (0.01835) and a high coefficient of determination was chosen as the calibration model, and low RMSEC (0.01835) and RMSEP (0.10925) values were obtained for validation. High coefficient of determination values, R2 = 0.999 for calibration and R2 = 0.991 for validation, were observed (Fig. 3c). The RSD and REP values of PLS are presented in Table 1.
In addition to the PLS method, ANN was also used for Na quantification in standard bread samples. The same calibration and validation data sets were used for the ANN model. The network that had the maximum R2 value between predicted and actual data was selected as the best-trained network. Then, the best-trained network was used for prediction of the Na content in standard breads with ANN. The predicted calibration and validation data sets were compared with the experimental data sets, and high correlations were obtained for Na concentrations (Fig. 3b). High coefficient of determination values, R2 = 0.987 and R2 = 0.964, were observed for the calibration and validation data sets, respectively. The REP and RSD values of the ANN model are presented in Table 1.
When the PLS method was compared with standard calibration curve and ANN methods according to Table 1, the PLS method gave the best results with R2 values of 0.999 for calibration and 0.991 for validation. Furthermore, PLS has shown an excellent potential with high prediction precision and prediction accuracy compared to other methods.
For comparison, the Na concentrations in commercial samples such as biscuits, crackers and some kinds of bread were also analyzed with AAS. Comparative results between AAS and LIBS for commercial products analyzed with the SCC, ANN and PLS models are presented in Fig. 4a, b, c, and the RSD and REP values are summarized in Table 2. SCC is the most commonly applied method for quantitative analyses because of its simplicity. However, this method is only useful if the standard sample's matrix resembles the real sample's matrix (http://physics.nist.gov/PhysRefData/ASD/index.html).
Correlation between AAS and LIBS method for commercial products with SCC (a), ANN (b), PLS (c) data analysis techniques
Table 2 Prediction of Na concentration in commercial products with SCC, ANN and PLS
In addition, experimental uncertainties may affect the calibration curve. Low detection capabilities and matrix effects are negative factors for LIBS in SCC. Specific reference materials are preferred to obtain better calibration curves. However, due to difficulties in obtaining proper reference materials, the efficiency of the SCC method is limited, and this situation makes it more difficult to obtain the calibration set. Also for commercial products, PLS gave the best results, with high prediction ability compared to the other methods. Low RMSEC (0.29861) and RMSEP (0.13893) values were obtained for the validation of commercial products. The PLS model increased the R2 value from 0.788 to 0.943 for commercial products. When the standard calibration curve for standard bread samples is considered, it is clear that the relation between Na concentration and LIBS signal is linear. PLS generates new principal components based on the response of the measured concentration data. Linear regression on the new principal components helps to capture the variation in the response variable [39]. Hence, PLS yields a precise calibration. This tendency explains why PLS gave better results than ANN, which is more suitable for nonlinear models. The principle of the ANN model is based on receiving a series of input data evaluated by each neuron; therefore, the data are weighted dynamically. Each neuron compares the weighted sum of its inputs to a given threshold value and applies a nonlinear function to calculate the output [18]. On the other hand, the overall performance of PLS is quite satisfactory, with high prediction accuracy and precision compared to the other methods. High RSD values can be explained by fluctuations of LIBS experimental parameters, such as changing plasma conditions and spectral interference. These problems can be overcome by accumulating a high number of shots for each sample. The prediction ability of the PLS model is more satisfactory for the validation data of standard bread samples than for the validation data of commercial products. This is due to matrix differences between standard bread samples and commercial products. The application field of PLS has expanded to the biomedical, pharmaceutical, social science and other fields [39,40,41,42], and it has recently shown great potential in LIBS applications [43].
Combining the LIBS and PLS methods to measure Na concentrations also gave acceptable results for commercial products, because PLS, as a multivariate analysis, is more accurate, robust and reliable than the SCC method. The limit of detection (LOD) was calculated as 0.0279%, and the limit of quantification (LOQ) as 0.0930%, for PLS. In some studies, authors obtained lower detection limits, as low as 5 ppm [44] and even 0.1 ppb, by using dual-pulse and crossed-beam Nd:YAG lasers for Na on a water film [45]. However, our LOD and LOQ values are quite low for food products, which makes this method convenient for the measurement of Na even in dietary food.
Na is an important ingredient in food products, both because of its potential to cause health problems such as heart disease and stroke, and because of its role as a quality control parameter influencing taste, yeast activity, strength of the gluten network, and gas retention [46]. Thus, Na levels in food should be controlled in accordance with the recommendations. Measurement of Na in bread can be performed by titration, AAS and potentiometric methods. These regulatory methods are time consuming and require sample preparation. In contrast, LIBS can be a rapid and valuable tool for Na measurement in bakery products.
A comparative study between the standard calibration curve, ANN and PLS methods was conducted for measurement of NaCl in bakery products. Calibration data set was obtained by preparing the standard bread samples at various salt concentrations. Optimization was performed for each calibration method. According to the calibration results, PLS method gave the best results for validation curves and prediction of commercial samples. Experimental results showed that PLS method enhanced the performance of LIBS for quantitative analysis. Thanks to the PLS method LIBS can be a valuable tool for routine NaCl measurements in bakery products.
There is no data that needs to be shared.
Pandhija, S., Rai, N.K., Rai, A.K., Thakur, S.N.: Contaminant concentration in environmental samples using LIBS and CF-LIBS. Appl. Phys. B. Lasers. Opt. 98(1), 231–241 (2010). https://doi.org/10.1007/s00340-009-3763-x
St-Onge, L., Kwong, E., Sabsabi, M., Vadas, E.B.: Rapid analysis of liquid formulations containing sodium chloride using laser-induced breakdown spectroscopy. J. Pharm. Biomed. Anal. 36(2), 277–284 (2004). https://doi.org/10.1016/j.jpba.2004.06.004
Bustamante, M.F., Rinaldi, C.A., Ferrero, J.C.: Laser induced breakdown spectroscopy characterization of ca in a soil depth profile. Spectrochim. Acta. Part. B. 57(2), 303–309 (2002). https://doi.org/10.1016/S0584-8547(01)00394-9
Maravelaki-Kalaitzaki, P., Anglos, D., Kilikoglou, V., Zafiropulos, V.: Compositional characterization of encrustation on marble with laser induced breakdown spectroscopy. Spectrochim. Acta. Part. B. 56(6), 887–903 (2001). https://doi.org/10.1016/S0584-8547(01)00226-9
Pandhija, S., Rai, A.K.: Laser-induced breakdown spectroscopy: a versatile tool for monitoring traces in materials. Pramana. 70(3), 553–563 (2008). https://doi.org/10.1007/s12043-008-0070-8
Gomba, J.M., D'Angelo, C., Bertuccelli, D., Bertuccelli, G.: Spectroscopic characterization of laser-induced breakdown in aluminum-lithium alloy samples for quantitative determination of traces. Spectrochim. Acta. Part. B. 56(6), 695–705 (2001). https://doi.org/10.1016/S0584-8547(01)00208-7
Lee, W.B., Wu, J.Y., Lee, Y.I., Sneddon, J.: Recent applications of laser-induced breakdown spectrometry: a review of material approaches. Appl. Spectrosc. Rev. 39(1), 27–97 (2004). https://doi.org/10.1081/ASR-120028868
Li, J., Lu, J., Lin, Z., Gong, S., Xie, C., Chang, L., Yang, L., Li, P.: Effects of experimental parameters on elemental analysis of coal by laserinduced breakdown spectroscopy. Opt. Laser Technol. 41(8), 907–913 (2009). https://doi.org/10.1016/j.optlastec.2009.03.003
Beldjilali, S., Borivent, D., Mercadier, L., Mothe, E., Clair, G., Hermann, J.: Evaluation of minor element concentrations in potatoes using laser-induced breakdown spectroscopy. Spectrochim. Acta. Part. B. 65(8), 727–733 (2010). https://doi.org/10.1016/j.sab.2010.04.015
Feng, J., Wang, Z., Li, Z., Ni, W.: Study to reduce laser-induced breakdown spectroscopy measurement uncertainty using plasma characteristic parameters. Spectrochim. Acta. Part. B. 65(7), 549–556 (2010). https://doi.org/10.1016/j.sab.2010.05.004
Rusak, D.A., Castle, B.C., Smith, B.W., Winefordner, J.D.: Fundamentals and applications of laser-induced breakdown spectroscopy. Crit. Rev. Anal. Chem. 27(4), 257–290 (1997). https://doi.org/10.1080/10408349708050587
Sneddon, J., Lee, Y.-I.: Novel and recent applications of elemental determination by laser-induced breakdown spectrometry. Anal. Lett. 32(11), 2143–2162 (1999). https://doi.org/10.1080/00032719908542960
St-Onge, L., Kwong, E., Sabsabi, M., Vadas, E.B.: Quantitative analysis of pharmaceutical products by laser-induced breakdown spectroscopy. Spectrochim. Acta. Part. B. 57(7), 1131–1140 (2002). https://doi.org/10.1016/S0584-8547(02)00062-9
Tognoni, E., Palleschi, V., Corsi, M., Cristoforetti, G.: Quantitative micro-analysis by laser-induced breakdown spectroscopy: a review of the experimental approaches. Spectrochim. Acta. Part. B. 57(7), 1115–1130 (2002). https://doi.org/10.1016/S0584-8547(02)00053-8
Inakollu, P., Philip, T., Rai, A.K., Yueh, F.-Y., Singh, J.P.: A comparative study of laser induced breakdown spectroscopy analysis for element concentrations in aluminum alloy using artificial neural networks and calibration methods. Spectrochim. Acta. Part. B. 64, 99–104 (2009)
Clegg, S.M., Sklute, E., Dyar, M.D., Barefield, J.E., Wiens, R.C.: Multivariate analysis of remote laser-induced breakdown spectroscopy spectra using partial least squares, principal component analysis, and related techniques. Spectrochim. Acta. Part. B. 64(1), 79–88 (2009). https://doi.org/10.1016/j.sab.2008.10.045
Salle, B., Lacour, J.-L., Mauchien, P., Fichet, P., Maurice, S., Manhès, G.: Comparative study of different methodologies for quantitative rock analysis by laser-induced breakdown spectroscopy in a simulated Martian atmosphere. Spectrochim. Acta. Part. B. 61(3), 301–313 (2006). https://doi.org/10.1016/j.sab.2006.02.003
Sirven, J.-B., Bousquet, B., Canioni, L., Sarger, L.: Laser-induced breakdown spectroscopy of composite samples: comparison of advanced Chemometrics methods. Anal. Chem. 78(5), 1462–1469 (2006). https://doi.org/10.1021/ac051721p
Sokullu, E., Palabıyık, I.M., Onur, F., Boyacı, I.H.: Chemometric methods for simultaneous quantification of lactic, malic and fumaric acids. Eng. Life Sci. 10(4), 297–303 (2010). https://doi.org/10.1002/elsc.200900080
Amador-Hernández, J., García-Ayuso, L.E., Fernandez-Romero, J.M., Luque de Castro, M.D.: Partial least squares regression for problem solving in precious metal analysis by laser induced breakdown spectrometry. J. Anal. At. Spectrom. 15(6), 587–593 (2000). https://doi.org/10.1039/B000813N
Kılıç, K., Bas, D., Boyacı, I.H.: An easy approach for the selection of optimal neural network structure. J. Food. 34(2), 73–81 (2009)
Yu, K., Ren, J., Zhao, Y.: Principles, developments and applications of laser-induced breakdown spectroscopy in agriculture: a review. Artif. Intel. Agric. 4, 127–139 (2020)
Wang, Z., Feng, J., Li, L., Ni, W., Li, Z.: A multivariate model based on dominant factor for laser-induced breakdown spectroscopy measurements. J. Anal. At. Spectrom. 26(11), 2289–2299 (2011). https://doi.org/10.1039/c1ja10041f
Sattmann, R., Moench, I., Krause, H., Noll, R., Couris, S., Hatziapostolou, A., Mavromanolakis, A., Fotakis, C., Larrauri, E., Miguel, R.: Laser-induced breakdown spectroscopy for polymer identification. Appl. Spectrosc. 52(3), 456–461 (1998). https://doi.org/10.1366/0003702981943680
Sirven, J.-B., Bousquet, B., Canioni, L., Sarger, L., Tellier, S., Potin-Gantier, M., Le Hecho, I.: Qualitative and quantitative investigation of chromium-polluted soils by laser-induced breakdown spectroscopy combined with neural networks analysis. Anal. Bioanal. Chem. 385, 256 (2006)
Motto-Ros, V., Koujelev, A.S., Osinski, G.R., Dudelzak, A.E.: Quantitative multi-elemental laser-induced breakdown spectroscopy using artificial neural networks. J Eur Optical Soc. 3, 08011 (2008). https://doi.org/10.2971/jeos.2008.08011
Elliott, P., Stamler, J., Nichols, R., Dyer, A.R., Stamler, R., Kesteloot, H., Marmot, M.: Intersalt revisited: further analyses of 24 hour sodium excretion and blood pressure within and across populations. Br. Med. J. 312(7041), 1249–1253 (1996). https://doi.org/10.1136/bmj.312.7041.1249
Tuomilehto, J., Jousilahti, P., Rastenyte, D., Moltchanov, V., Tanskanen, A., Pietinen, P., Nissinen, A.: Urinary sodium excretion and cardiovascular mortality in Finland: a prospective study. Lancet. 357(9259), 848–851 (2001). https://doi.org/10.1016/S0140-6736(00)04199-4
FSAI (Food Safety Authority of Ireland). Salt and health: review of the scientific evidence and recommendations for public policy in Ireland, 2005. URL http://www.fsai.ie/uploadedFiles/Science_and_Health/salt_report-1.pdf. Accessed 28.09.2014
Capuano, E., Van der Veer, G., Verheijen, P.J.J., Heenan, S.P., Van de Laak, L.F.J., Koopmans, H.B.M., Van Ruth, S.M.: Comparison of a sodium-based and a chloride-based approach for the determination of sodium chloride content of processed foods in the Netherlands. J. Food Compos. Anal. 31(1), 129–136 (2013). https://doi.org/10.1016/j.jfca.2013.04.004
Smith, T., Haider, C.: Novel method for determination of sodium in foods by thermometric endpoint titrimetry (TET). J. Agric. Chem. Environ. 3(1B), 20–25 (2014)
Markiewicz-Keszycka, M., Cama-Moncunill, X., Casado-Gavalda, M.P., Dixit, Y., Cama-Moncunill, R., Cullen, P.J., Sullivan, C.: Laser-induced breakdown spectroscopy (LIBS) for food analysis: a review. Trends Food Sci. Technol. 65, 80–93 (2017). https://doi.org/10.1016/j.tifs.2017.05.005
Agrawal, R., Kumar, R., Rai, S., Pathak, A.K., Rai, A.K., Rai, G.K.: LIBS: a quality control tool for food supplements. Food Biophysics. 6(4), 527–533 (2011). https://doi.org/10.1007/s11483-011-9235-y
Bilge, G., Boyacı, I.H., Eseller, K.E., Tamer, U., Çakır, S.: Analysis of bakery products by laser-induced breakdown spectroscopy. Food Chem. 181, 186–190 (2015). https://doi.org/10.1016/j.foodchem.2015.02.090
Sezer, B., Bilge, G., Boyaci, I.H.: Capabilities and limitations of LIBS in food analysis. TrAC Trends Anal. Chem. 97, 345–353 (2017). https://doi.org/10.1016/j.trac.2017.10.003
AACCI (American Association of Cereal Chemists International). (2010). Approved Methods of Analysis. 11th Ed. AACCI: St. Paul. Methods 10–10.03
EPA Method 3051. (1994). Microwave assisted acid digestion of sediments, sludges, soils and oils
Uysal, R.S., Boyaci, I.H., Genis, H.E., Tamer, U.: Determination of butter adulteration with margarine using Raman spectroscopy. Food. Chem. 141(4), 4397–4403 (2013). https://doi.org/10.1016/j.foodchem.2013.06.061
Tripathi, M.M., Eseller, K.E., Yueh, F.-Y., Singh, J.P.: Multivariate calibration of spectra obtained by laser induced breakdown spectroscopy of plutonium oxide surrogate residues. Spectrochim. Acta Part B. 64(11-12), 1212–1218 (2009). https://doi.org/10.1016/j.sab.2009.09.003
Lengard, V., Kermit, M.: 3-way and 3-block PLS regressions in consumer preference analysis. Food Qual. Prefer. 17(3–4), 234–242 (2006). https://doi.org/10.1016/j.foodqual.2005.05.005
Krishnan, A., Williams, L.J., McIntosh, A.R., Abdi, H.: Partial least squares (PLS) methods for neuroimaging: a tutorial and review. NeuroImage. 56(2), 455–475 (2011). https://doi.org/10.1016/j.neuroimage.2010.07.034
Chiang, Y.-H.: Using a combined AHP and PLS path modelling on blog site evaluation in Taiwan. Comput. Hum. Behav. 29(4), 1325–1333 (2013). https://doi.org/10.1016/j.chb.2013.01.025
Ortiz, M.C., Sarabia, L., Jurado-Lopez, A., Luque de Castro, M.D.: Minimum value assured by a method to determine gold in alloys by using laser-induced breakdown spectroscopy and partial least-squares calibration model. Anal. Chim. Acta. 515(1), 151–157 (2004). https://doi.org/10.1016/j.aca.2004.01.003
Hussain, T., Gondal, M.A.: Laser induced breakdown spectroscopy (LIBS) as a rapid tool for material analysis. J. Phys. 439, 1–12 (2013)
Kuwako, A., Uchida, Y., Maeda, K.: Supersensitive detection of sodium in water with use of dual-pulse laser-induced breakdown spectroscopy. Appl. Opt. 42(50), 6052–6056 (2003). https://doi.org/10.1364/AO.42.006052
Lynch, E.J., Dal Bello, F., Sheehan, E.M., Cashman, K.D., Arendt, E.K.: Fundamental studies on the reduction of salt on dough and bread characteristics. Food Res. Int. 42(7), 885–891 (2009). https://doi.org/10.1016/j.foodres.2009.03.014
Furthermore, we gratefully thank Assist. Prof. Dr. Aysel Berkkan for performing atomic absorption spectroscopy analysis.
This research received no external funding.
Department of Food Engineering, Konya Food and Agriculture University, Meram, 42080, Konya, Turkey
Gonca Bilge
Department of Electrical and Electronics Engineering, Atilim University, 06836, Ankara, Turkey
Kemal Efe Eseller
Department of Physics, Ankara Hacı Bayram Veli University, 06900, Polatlı-Ankara, Turkey
Halil Berberoglu
Food Research Center, Hacettepe University, Beytepe, 06800, Ankara, Turkey
Banu Sezer & Ismail Hakki Boyaci
Department of Analytical Chemistry, Faculty of Pharmacy, Gazi University, 06330, Ankara, Turkey
Ugur Tamer
Banu Sezer
Ismail Hakki Boyaci
Data analysis and sample preparation were performed by GB and BS; system optical design and data processing were done by KEE and HB. IHB and UT validated the theoretical framework on which this research is based, polished the text and brought the manuscript into its final form. The authors read and approved the final manuscript.
Correspondence to Kemal Efe Eseller.
Bilge, G., Eseller, K.E., Berberoglu, H. et al. Comparison of different calibration techniques of laser induced breakdown spectroscopy in bakery products: on NaCl measurement. J. Eur. Opt. Soc.-Rapid Publ. 17, 18 (2021). https://doi.org/10.1186/s41476-021-00164-9
Laser induced breakdown spectroscopy
Artificial neural network
Partial least square
|
CommonCrawl
|
Lipid production by the oleaginous yeast Yarrowia lipolytica using industrial by-products under different culture conditions
Magdalena Rakicka1,2,3,5 nAff3,
Zbigniew Lazar1,2,3,
Thierry Dulermo1,2,
Patrick Fickers4 &
Jean Marc Nicaud1,2,5
Microbial lipid production using renewable feedstock shows great promise for the biodiesel industry.
In this study, the ability of a lipid-engineered Yarrowia lipolytica strain JMY4086 to produce lipids using molasses and crude glycerol under different oxygenation conditions and at different inoculum densities was evaluated in fed-batch cultures. The greatest lipid content, 31% of CDW, was obtained using a low-density inoculum, a constant agitation rate of 800 rpm, and an oxygenation rate of 1.5 L/min. When the strain was cultured for 450 h in a chemostat containing a nitrogen-limited medium (dilution rate of 0.01 h−1; 250 g/L crude glycerol), volumetric lipid productivity was 0.43 g/L/h and biomass yield was 60 g CDW/L. The coefficient of lipid yield to glycerol consumption (Y L/gly) and the coefficient of lipid yield to biomass yield (Y L/X ) were equal to 0.1 and 0.4, respectively.
These results indicate that lipids may be produced using renewable feedstock, thus providing a means of decreasing the cost of biodiesel production. Furthermore, using molasses for biomass production and recycling glycerol from the biodiesel industry should allow biolipids to be sustainably produced.
The distinct possibility of fossil fuel depletion is currently forcing the fuel industry to develop alternative energy sources, such as biodiesel [1]. Because biodiesel is derived from vegetable oils, there is competition between biodiesel producers and food crop farmers for arable lands [2]. Consequently, one of the industry's goals is to find novel ways of producing biodiesel. One possible strategy involves the transformation of waste materials and/or co-products, such as whey, crop residues, crude glycerol, or crude fats, into triglycerides or fatty acids using microbial cell factories [3, 4]. These processes are advantageous compared to conventional methods, since they use waste materials generated by various industries as feedstock. Moreover, microbial lipid can be produced in close proximity to biodiesel industrial plants and it is easy to scale up their production [5].
Different bacteria, yeasts, algae, and fungi have the ability to convert carbohydrates and other substrates into intracellular lipid. When a microorganism's intracellular lipid accumulation levels are greater than 20% of cell dry weight (CDW), it is labeled an "oleaginous microorganism". Oleaginous microorganisms include yeast species, such as Rhodosporidium sp., Rhodotorula sp., Lipomyces sp., and Yarrowia lipolytica, whose intracellular lipid accumulation levels can reach 80% of CDW [6–8]. The main components of the accumulated lipid are triacylglycerols composed of long-chain fatty acids (16–18 carbon atoms in the chain) [6–8].
There are many ways of increasing intracellular lipid accumulation. Some involve metabolically engineering microbial strains to either improve their lipid storage capacities or synthesize lipids with specific fatty acid profiles [8–11]. Others focus on refining the production process by identifying optimal culture conditions and defining optimal medium composition [12–14]. For instance, fed-batch culturing is the most convenient system in pilot experiments seeking to establish optimal production conditions: it helps identify the best medium composition and any supplements needed. However, continuous cultures are also of great interest when the goal is to enhance lipid accumulation levels, especially those of yeast grown as well-dispersed, non-filamentous cells [15].
Due to Y. lipolytica's unique physiological characteristics (i.e., its ability to metabolize hydrophobic substrates such as alkanes, fatty acids, and lipids), its ability to accumulate high levels of lipids, and its suite of efficient genetic tools [16], this yeast is a model organism for biolipid production and it is thought to have great applied potential [6–8, 11], both in the production of typical biofuel lipids [9–11] and oils with unusual fatty acid profiles or polyunsaturated fatty acids [3, 4, 17]. In this study, Y. lipolytica JMY4086, a strain with an improved lipid accumulation capacity, was used to exploit unpurified, low-cost industrial by-products, such as sugar beet molasses and the crude glycerol produced by the biodiesel industry, and lipid production under different culture conditions was quantified. Molasses was used as a source of carbon, minerals, and vitamins, which are crucial for fermentation [18]. Moreover, molasses is used as the main substrate in the production of baker's yeast, organic acids, amino acids, and acetone/butanol [15]. In yeast, the glycolytic pathway produces intermediate compounds from glycerol either via the phosphorylation pathway [19, 20] or the oxidative pathway (dehydrogenation of glycerol and the subsequent phosphorylation of the reaction product) [21]. Dihydroxyacetone phosphate, the product of these reactions, can subsequently be converted into citric acid, storage lipids, or various other products [22, 23]. Additionally, glycerol may be readily incorporated into the core of triglycerides, which are stored in lipid bodies along with steryl esters [10].
The aim of this study was to produce valuable information that could be used in future research examining the biotransformation of crude glycerol into triglycerides (TAGs) with a view to producing biolipids, also known as single-cell oils (SCOs). This process may serve as an alternative means of decreasing biodiesel production costs while simultaneously recycling glycerol.
Previous work has found that Y. lipolytica JMY4086 can produce biolipids from substrates such as pure glucose, fructose, and sucrose in batch bioreactors [19]. The present study investigated whether low-cost raw materials such as molasses and crude glycerol could also serve as substrates for biolipid production and accumulation; the substrate concentration, oxygenation conditions, and inoculum densities were varied. Compared to other oleaginous microorganisms, Y. lipolytica has the unique ability to accumulate lipids when nitrogen is limited and to remobilize them when carbon is limited [24]. Therefore, all culturing was performed under low nitrogen conditions. Furthermore, TAG remobilization was avoided because the TGL4 gene, which encodes triglyceride lipase YlTgl4, was deleted from JMY4086 [17].
Fed-batch cultures subject to different oxygenation conditions and initiated with different inoculum densities
Studies examining lipid production by Y. lipolytica using fed-batch or repeated-batch cultures are scarce. Moreover, only glycerol has been used as a substrate for cell growth and lipid synthesis [25]. This study utilized a two-step process: biomass was produced using molasses for 48 h, and then lipids were produced using glycerol as the main carbon source. Biomass yield and lipid production were analyzed at two different inoculum densities (low density and high density) and under two sets of oxygenation conditions (unregulated and regulated). In the unregulated strategy "Oxy-const", dissolved oxygen (DO) was not regulated; in the regulated strategy "Oxy-regul", DO was regulated at 50% saturation (see "Methods").
When the Oxy-const strategy and a low-density inoculum were used, the biomass reached 50 g CDW/L and citric acid production was 36.8 g/L after 55 h of culture (Figure 1a, b). During the glycerol-feeding phase, cells converted the citric acid produced into lipids. Under those conditions, the total lipid concentration increased from 11 to 15.5 g/L (Figure 1c). Yeast lipid content reached 31% of CDW, which corresponds to a volumetric lipid productivity (Q L ) of 0.18 g/L/h and a coefficient of lipid yield to glycerol consumption (Y L/gly) of 0.083 g/g (Table 1). This condition also produced a small number of mycelial cells (Figure 2a). Indeed, low DO levels have been shown to induce the yeast-to-mycelium transition in Y. lipolytica. Bellou and colleagues demonstrated that mycelial and/or pseudomycelial forms predominated over the yeast form when DO was low, regardless of the carbon and nitrogen sources used [26].
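Purely as an illustration of how such figures of merit are commonly defined (the exact time window and lipid basis used in the paper may differ, and the function below is hypothetical rather than taken from the Methods), the productivity and yield coefficients can be computed from measured quantities as follows:

def lipid_figures_of_merit(lipid_g_per_l, biomass_g_cdw_per_l,
                           glycerol_consumed_g_per_l, time_h):
    # Q_L: volumetric lipid productivity (g/L/h)
    q_l = lipid_g_per_l / time_h
    # Y_L/gly: lipid yield per gram of glycerol consumed (g/g)
    y_l_gly = lipid_g_per_l / glycerol_consumed_g_per_l
    # Y_L/X: lipid yield per gram of biomass (g/g)
    y_l_x = lipid_g_per_l / biomass_g_cdw_per_l
    return q_l, y_l_gly, y_l_x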
Effect of oxygenation conditions and inoculation densities on growth, citric acid production, and lipid production by Y. lipolytica grown in molasses. Strain JMY4086 was grown in a molasses medium and fed with crude glycerol. Growth is expressed as a cell dry weight, b citric acid production, and c lipid production. X biomass, CA citric acid, L lipids, 1 low-density inoculum/unregulated oxygenation condition (filled circles), 2 low-density inoculum/regulated oxygenation condition (filled squares), 3 high-density inoculum/unregulated oxygenation condition (filled triangles), 4 high-density inoculum/regulated oxygenation condition (filled diagonals). The low-density and high-density inocula had optical densities of OD600 = 1 and OD600 = 6, respectively. For the unregulated oxygenation condition, stirring speed was a constant 800 rpm and the aeration rate was 1.5 L/min. For the regulated oxygenation condition, dissolved oxygen was maintained at 50% saturation and the aeration rate was 0–3.5 L/min. All the results presented are the mean values ± SD for two independent biological replicates.
Table 1 Lipid production by Y. lipolytica JMY4086 during the glycerol-feeding phase for different oxygenation conditions and inoculation densities
Visualization of JMY4086 cell morphology and lipid bodies at the end of the fed-batch culturing experiment. Images are of cultures from the a low-density inoculum/unregulated oxygenation condition, b low-density inoculum/regulated oxygenation condition, c high-density inoculum/unregulated oxygenation condition; and d high-density inoculum/regulated oxygenation condition. The lipid bodies were stained with Bodipy®.
When oxygenation was regulated and a low-density inoculum was used, cell growth was surprisingly very slow (r x = 0.24 gCDW/h) and the culture duration (the time required for complete glycerol consumption) was 190 h. Consequently, when crude glycerol was fed into the bioreactor at 48 h, the fructose concentration was still high (30 g/L). In this condition, the biomass yield was lower (40 g/L) because citric acid production was greater (50 g/L) (Figure 1a, b); there was an apparent trade-off between the two processes. The citric acid produced was never reconsumed. The total lipid content was very low, 7 g/L, which corresponds to a Q L of 0.04 g/L/h (Figure 1c; Table 1). However, Y L/gly and Y L/X were equal to 0.056 and 0.17 g/g, respectively (Table 1). In these conditions, JMY4086 formed short true mycelia and pseudomycelia (Figure 2b).
High-density inocula were also used under both regulated and unregulated oxygenation conditions. As shown in Figure 1, the lag phase became shorter and culture duration decreased significantly; the latter was 70 and 66 h under regulated and unregulated conditions, respectively. When oxygenation was unregulated, the biomass yield was 58 g/L; it reached 70 g/L when oxygenation was regulated (Figure 1a). Citric acid production was similar across the two conditions (19 and 23 g/L, respectively); however, it was only reconsumed when DO was regulated (Figure 1b). In both cases, compared to the unregulated/low-density condition, Q L was low, as were Y L/gly and Y L/X (Table 1). Furthermore, in both conditions, JMY4086 formed short true mycelia and pseudomycelia (Figure 2c, d).
Because the fed-batch culture initiated with a low-density inoculum and subject to unregulated oxygenation had the highest lipid production, these conditions were used in a second experiment, in which a higher airflow rate of 3.5 L/min (the high-oxygen condition, "Oxy-high") was used. As a consequence, the lag phase lengthened, sucrose hydrolysis began later, after 30 h (Figure 3), and lipid accumulation was limited. Citric acid production exceeded 40 g/L, and the compound was not reconsumed (Figure 3). The biomass yield was 59 g/L, and the final lipid content was 7.7 g/L, which corresponds to a Y_L/gly of 0.077 g/g (Table 1). These results indicate that increasing the oxygenation rate did not improve yeast growth and lipid production.
Time course of carbon source concentrations, biomass yield, and lipid and citric acid production during culture of Y. lipolytica JMY4086 under the low-density inoculum/high-oxygen experimental conditions. Sucrose (SUC), glucose (GLU), fructose (FRU), glycerol (GLY), biomass yield (X), lipid (L), and citric acid (CA). For the high-oxygen condition, the stirring speed was a constant 800 rpm, and the aeration rate was maintained at 3.5 L/min.
The fed-batch experiments revealed that the highest Q_L, Y_L/gly, and Y_L/X values were obtained using a low-density inoculum and unregulated oxygenation. Consequently, these conditions were used in the continuous culture pilot experiment.
Continuous culture experiment: effects of increasing concentrations of glycerol in the feeding medium
Little research has looked at the synthesis of biolipids from sugars or renewable feedstock by nitrogen-limited continuous cultures [27–29]. Papanikolaou and Aggelis conducted the only study to date to examine biolipid synthesis by Y. lipolytica under continuous culture conditions using glycerol as the sole substrate [15].
In this experiment, we used a stepwise continuous fed-batch (SCFB) approach to test the effect of glycerol concentration on lipid production. The cultures started as batch cultures that were grown with molasses to produce biomass; once the sugar supply was exhausted, continuous culturing was initiated. Glycerol was used as feed, and the dilution rate was 0.01 h−1. The glycerol concentration in the feeding medium was increased from 100 to 450 g/L in steps that took place every 100 h (Figure 4). It has been shown that the dilution rate and culture-medium C:N ratio strongly affect lipid accumulation [28, 30]. In general, dilution rates of less than 0.06 h−1 have been shown to maximize lipid production in continuous cultures across different yeasts [31]. However, a dilution rate of about 0.01 h−1 also optimizes Y. lipolytica's production of citric acid from glycerol [32]. Therefore, in this experiment, a dilution rate of 0.01 h−1 was used. In addition, for JMY4086, lipid accumulation levels were similar across a range of C:N ratios, from 60:1 to 120:1 [19]. Consequently, to maximize cell growth and prevent nitrogen starvation, SCFB culturing was performed using a C:N ratio of 60:1.
Biomass (X), lipid (L), and citric acid (CA) production during SCFB culture of Y. lipolytica JMY4086. All the results presented are the mean values ± SD for two independent replicates. The black line (GLY) without symbol represents the glycerol concentration in the feeding solution.
Biomass yield and lipid production depended on the glycerol concentration in the feed solution (Figure 4). For glycerol concentrations of 100 g/L, the biomass yield was 32.2 g CDW/L; it reached 67.4 g CDW/L at the highest glycerol concentrations (450 g/L; Table 2). Under such conditions, DO was not limiting, except after 600 h of culture (data not shown). Glycerol was never detected in the culture broth, except at higher feeding concentrations (450 g/L), where glycerol accumulated in the culture broth at a concentration of 0.5 g/L. By comparison, Meesters et al. [33] observed that, in Cryptococcus curvatus, cell growth was restricted during lipid accumulation when glycerol concentrations were higher than 64 g/L.
Table 2 Characteristics of lipid production in continuous cultures of Y. lipolytica JMY4086 grown in crude glycerol; SCFB and chemostat culturing were used
The highest Y_L/gly value was obtained at a glycerol concentration of 100 g/L. However, since biomass yield was lowest at that concentration, Q_L was also low (0.09 g/L/h). In contrast, when the glycerol concentration was 250 g/L, Q_L and Y_L/X were 0.31 g/L/h and 0.46, respectively. At higher glycerol concentrations (350 and 450 g/L), both Y_L/gly and Y_L/X were lower (Table 2). During SCFB culturing, very low concentrations of citric acid were present until 400 h. Then, as the glycerol concentration increased to 350 g/L, citric acid started to accumulate; it reached a concentration of 40 g/L (Table 2). This accumulation of citric acid may have resulted from nitrogen limitations or a transition in cell morphology. Indeed, cells occurred in yeast form up until 400 h, at which point they started to filament, forming true mycelia and pseudomycelia (data not shown).
Lipid production from glycerol in a chemostat culture
The SCFB culturing experiment showed that a feed glycerol concentration of 250 g/L yielded the highest Q_L and Y_L/X values. In addition, citric acid and glycerol did not accumulate under those conditions; DO levels exceeded 70% saturation and no mycelia were observed. To assess lipid production in continuous cultures, yeasts were grown in a chemostat for over 400 h using a dilution rate of 0.01 h−1. To start, yeasts were batch cultured for 48 h using molasses as the primary carbon source. Then, chemostat culturing was used; yeasts were kept in a medium with a glycerol concentration of 100 g/L for 100 h. They were then given a medium with a glycerol concentration of 250 g/L (Figure 5). At a steady state, between 200 h and 500 h of culture, the biomass yield was 59.8 g/L. The yeast produced 24.2 g/L of lipids; Q_L was 0.43 g/L/h; and Y_L/gly and Y_L/X were 0.1 and 0.4, respectively (Table 2). Under these conditions, citric acid production was 50 g/L (Figure 5; Table 2). In contrast to the SCFB experiments, citric acid was produced from the start in the chemostat culture. One hypothesis explaining this difference between the two culture methods is that, in the chemostat, DO was limited. During SCFB culture, DO was not a limiting factor until 600 h into the experiment, when citric acid began to be secreted into the culture broth. However, in the chemostat culture, when the glycerol concentration in the feeding medium was increased to 250 g/L, DO dramatically decreased. DO limitations resulted in citric acid secretion, but not in cell filamentation. The cell morphology was constant; indeed, during the whole culturing process, cells remained in yeast form (Figure 6). Fatty acid profiles were similar across the three types of cultures (fed-batch, SCFB, and chemostat; Table 3). The yeast produced mainly C16 and C18 long-chain fatty acids, as do other oleaginous yeasts [25, 32]. In general, differences in fatty acid profiles seem to result not from culture type, but from substrate type. When industrial fats have been used as carbon sources, yeast demonstrates a different total fatty acid composition, which is characterized by high levels of cellular stearic acid [3].
Biomass (X), lipid (L), and citric acid (CA) production during the chemostat culture of JMY4086 when 250 g/L of crude glycerol was present in the feeding medium. All the results presented are the mean values ± SD for two independent replicates.
Cell morphology of JMY4086 when the strain was continuously cultured in crude glycerol: a at 200 h and b at 400 h. The white squares show a representative cell that has been enlarged (×2).
Table 3 Fatty acid profiles for Y. lipolytica JMY4086 under different culture conditions
The results of this study represent a good starting point for research seeking to further optimize chemostat culturing conditions. We found that the concentration of dissolved oxygen is one of the most important factors affecting lipid production. Oxygen limitation can be rate limiting in carbon metabolism and results in citric acid secretion. Additionally, optimizing nitrogen levels in the medium is also an important means by which citric acid secretion can be restricted. However, increasing nitrogen concentration can also increase biomass production, which in turn can result in problems with the oxygenation and stirring of the medium. It is therefore important to balance the regulation of available nitrogen with the optimization of the dilution rate to avoid generating overly high biomass concentrations in the bioreactor. All of these parameters should be used to find the right equilibrium between biomass and lipid production, the goal being to maximize total lipid production using chemostat culturing.
In conclusion, the results obtained in this study clearly show that the continuous culture method is an interesting means of producing lipids. Overall lipid production in the continuous culture experiment was almost 2.3 times higher than that in the fed-batch culture experiment. Y. lipolytica JMY4086 produced 24.2 g/L of lipids; the coefficient of lipid yield to glycerol consumption (Y_L/gly) was 0.1 g/g and volumetric lipid productivity (Q_L) was 0.43 g/L/h. In the fed-batch cultures, lipid concentrations never exceeded 15.5 g/L, which corresponded to a Y_L/gly of 0.083 g/g and a Q_L of 0.18 g/L/h.
Bioengineered Y. lipolytica strain JMY4086 shows promise in the development of industrial biodiesel production processes. Indeed, in this strain, the inhibition of the degradation and remobilization pathways (via the deletion of the six POX genes and the TGL4 gene, respectively) was combined with the boosting of lipid synthesis pathways (via overexpression of DGA2 and GPD1). Additionally, molasses is an excellent substrate for biomass production, because it is cheap and contains several other compounds that are crucial for the fermentation processes. However, its concentration must be controlled because it also contains compounds that inhibit Y. lipolytica growth. Moreover, the subsequent addition of glycerol did not delay cell growth. This study provided valuable foundational knowledge that can be used in future studies to further optimize lipid production in fed-batch and continuous cultures in which biomass production takes place in molasses and lipid production takes place in industrial glycerol-based medium.
The Y. lipolytica strain used in this study, JMY4086 [17], was obtained by deleting the POX1–6 genes (POX1–POX6) that encode acyl-coenzyme A oxidases and the TGL4 gene, which encodes an intracellular triglyceride lipase. The aim was to block the β-oxidation pathway and inhibit TAG remobilization, respectively. In addition, to push and pull TAG biosynthesis, YlDGA2 and YlGPD1, which encode the major acyl-CoA:diacylglycerol acyltransferase and glycerol-3-phosphate dehydrogenase, respectively, were constitutively overexpressed. Additionally, the S. cerevisiae invertase SUC2 and Y. lipolytica hexokinase HXK1 genes were overexpressed to allow the strain to grow in molasses.
Medium and culturing conditions
The YPD medium contained Bacto™ Peptone (20 g/L, Difco, Paris, France), yeast extract (10 g/L, Difco, Paris, France), and glucose (20 g/L, Merck, Fontenay-sous-Bois, France). The medium for the batch cultures contained molasses (245 g/L, sucrose content of 600 g/L, Lesaffre, Rangueil, France), NH4Cl (4.0 g/L), KH2PO4 (0.5 g/L), MgCl2 (1.0 g/L), and YNB (without amino acids and ammonium sulfate, 1.5 g/L, Difco). For fed-batch cultures, crude glycerol (96% w/v, Novance, Venette, France) was added after 48 h at a feeding rate of 8.8 g/h until a total of 100 g/L of glycerol had been delivered (C:N ratio of 100:1). In the stepwise continuous fed-batch (SCFB) cultures, a C:N ratio of 60:1 was maintained as glycerol concentrations increased (100, 200, 250, 350, and 450 g/L); NH4Cl ranged from 4 to 12.5 g/L. Chemostat cultures were grown in either crude glycerol (100 g/L)/NH4Cl (2.5 g/L), with a C:N ratio of 25, or glycerol (250 g/L)/NH4Cl (6.25 g/L), with a C:N ratio of 40. Toward the beginning of the culturing process (for the first 100 h), the concentration of glycerol in the feeding medium was 100 g/L; it was subsequently increased to 250 g/L. This approach was used because past observations had suggested that slowly increasing the concentration of the carbon source in the feeding medium results in greater oxygenation of the culture and higher lipid production, whereas having the carbon source present in high concentrations from the beginning of the culturing process had resulted in strong cell filamentation and lower final lipid yields (data not shown). For the stepwise continuous fed-batch and chemostat cultures, the dilution rate was 0.01 h−1 and the working volume was maintained at 1.5 L. All culturing took place in a 5-L stirred tank reactor (Biostat B-plus, Sartorius, Germany). The temperature was controlled at 28°C and the pH was kept at 3.5 by adding 40% (w/v) NaOH. We used three oxygenation conditions in our experiments: unregulated dissolved oxygen (DO) ("Oxy-const"), regulated DO ("Oxy-regul"), and high DO ("Oxy-high"). For the unregulated condition, the airflow rate was 1.5 L/min and the stirring speed was 800 rpm. In the regulated condition, DO was maintained at 50% saturation by a PID controller (the airflow rate ranged between 0 and 3.5 L/min, and the stirring speed ranged between 200 and 1,000 rpm). In the Oxy-high condition, the airflow rate was 3.5 L/min and the stirring speed was 800 rpm. Bioreactors were inoculated using samples with an initial OD600 nm of 0.15 (low-density inoculum) or of 0.8 (high-density inoculum). Precultures were grown in YPD medium. The bioreactor containing a given medium (prepared with tap water) was sterilized in an autoclave at 121°C for 20 min. We conducted two biological replicates of all fed-batch cultures, for which means and standard deviations were calculated. A single replicate was performed for the SCFB and the chemostat culture.
Quantifying dry biomass
Ten milliliters of culture broth was centrifuged for 5 min at 13,000 rpm. The cell pellet was washed with distilled water and filtered on membranes with a pore size of 0.45 μm. The biomass yield was determined gravimetrically after samples were dried at 105°C. It was expressed in grams of cell dry weight per liter (gCDW/L).
Measuring sugar and citric acid concentrations
The concentrations of glycerol (GLY), sucrose (SUC), glucose (GLU), fructose (FRU), and citric acid (CA) were measured in the culture supernatants by HPLC (Dionex-Thermo Fisher Scientific, UK) using an Aminex HPX-87H column (Bio-Rad, Hercules, CA, USA) coupled with a refractive index (RI) detector (Shodex, Ogimachi, Japan). The column was eluted with 0.1 N sulfuric acid at 65°C at a flow rate of 0.6 ml min−1.
Images were obtained using a Zeiss Axio Imager M2 microscope (Zeiss, Le Pecq, France) with a 100× objective lens and Zeiss filter sets 45 and 46 for fluorescence microscopy. Axiovision 4.8 software (Zeiss, LePecq, France) was used for image acquisition. To make the lipid bodies visible, BodiPy® Lipid Probe (2.5 mg/mL in ethanol; Invitrogen) was added to the cell suspension (OD600 = 5) and the samples were incubated for 10 min at room temperature.
Quantifying lipid levels
The fatty acids (FAs) in 15-mg aliquots of freeze-dried cells were converted into methyl esters using the method described in Browse et al. [34, 30]. FA methyl esters were analyzed by gas chromatography (GC) on a Varian 3900 equipped with a flame ionization detector and a Varian Factor Four vf-23 ms column, for which the bleed specification at 260°C was 3 pA (30 m, 0.25 mm, 0.25 μm). FAs were identified by comparing their GC patterns to those of commercial FA methyl ester standards (FAME32; Supelco) and quantified using the internal standard method, which involved the addition of 50 mg of commercial C17:0 (Sigma). Total lipid extractions were obtained from 100-mg samples (expressed in terms of CDW, as per Folch et al. [16]). Briefly, yeast cells were spun down, washed with water, freeze dried, and then resuspended in a 2:1 chloroform/methanol solution and vortexed with glass beads for 20 min. The organic phase was collected and washed with 0.4 mL of 0.9% NaCl solution before being dried at 60°C overnight and weighed to quantify lipid production.
Volumetric lipid productivity (Q_L) was defined using Eqs. (1–3):
$$Q_{\text{L}} = \frac{{{\text{Lipid}}_{\text{acc}} + {\text{Lipid}}_{\text{out}} }}{V \Delta t},$$
$${\text{Lipid}}_{\text{acc}} = \left[ {\text{Lipid}} \right] \cdot X,$$
$${\text{Lipid}}_{\text{out}} = \frac{{\Delta \left( {{\text{Lipid}}_{\text{acc}} } \right)}}{F \Delta t}.$$
The coefficient of lipid yield to glycerol consumption (Y_L/gly) was defined using Eqs. (4–7):
$$Y_{{L/{\text{gly}}}} = \frac{{{\text{Lipid}}_{\text{acc}} + {\text{Lipid}}_{\text{out}} }}{{{\text{Gly}}_{\text{in}} - ({\text{Gly}}_{\text{acc}} + {\text{Gly}}_{\text{out}} )}},$$
$${\text{Gly}}_{\text{in}} = \left[ {\text{Gly}} \right] \cdot F \cdot \Delta t,$$
$${\text{Gly}}_{\text{acc}} = \Delta \left[ {\text{Gly}}_{\text{med}} \right] \cdot V,$$
$${\text{Gly}}_{\text{out}} = \frac{{\Delta \left( {{\text{Gly}}_{\text{acc}} } \right)}}{F \Delta t}.$$
The coefficient of lipid yield to biomass yield (Y_L/X) was defined using Eqs. (8–10):
$$Y_{L/X} = \frac{{{\text{Lipid}}_{\text{acc}} + {\text{Lipid}}_{\text{out}} }}{{X_{\text{in}} + X_{\text{out}} }},$$
$$X_{\text{in}} = \Delta \left[ X \right] \cdot V,$$
$$X_{\text{out}} = \frac{{\Delta \left( {X_{\text{in}} } \right)}}{F \Delta t}.$$
In the above equations, Lipid_acc is the lipid accumulated in the cells in the bioreactor (g); Lipid_out the lipid accumulated in the cells drawn off from the bioreactor (g); Δ(Lipid_acc) the difference in Lipid_acc for the time period Δt; Gly_in the glycerol fed to the bioreactor (g); Gly_acc the glycerol accumulated in the bioreactor (g); Gly_out the glycerol drawn off from the bioreactor (g); Δ(Gly_acc) the difference in Gly_acc for the time period Δt; V the volume of the culture (L); Δt the duration between two measurements (h); X the biomass yield (gCDW/L); [Gly] the glycerol concentration in the feeding medium (g/L); [Gly_med] the glycerol concentration in the bioreactor (g/L); [Lipid] the lipid concentration (g/gCDW); [X_in] the cell concentration in the bioreactor (gCDW/L); [X_out] the cell concentration in the culture broth drawn off from the bioreactor (gCDW/L); F the flow rate of the feeding medium (L/h).
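For readers who want to reproduce this bookkeeping, the short Python sketch below (written for this text, not taken from the original study) computes Q_L, Y_L/gly, and Y_L/X for one sampling interval of a continuous culture using a straightforward mass-balance reading of the quantities above: lipid, glycerol, and biomass either change inside the vessel or leave with the outflow F·Δt. All numerical values in the sketch are hypothetical, and the calculation is illustrative rather than the authors' exact procedure.

# Interval mass balance for a continuous culture (illustrative values only).
V  = 1.5      # working volume, L
F  = 0.015    # feed flow rate, L/h (dilution rate D = F/V = 0.01 per h)
dt = 24.0     # length of the sampling interval, h

# Measurements at the start (0) and end (1) of the interval (hypothetical):
X0, X1       = 58.0, 60.0   # biomass concentration, gCDW/L
lip0, lip1   = 0.38, 0.40   # lipid content of the cells, g lipid per gCDW
gly_feed     = 250.0        # glycerol concentration in the feeding medium, g/L
glyb0, glyb1 = 0.0, 0.0     # residual glycerol in the broth, g/L

# Lipid produced (g): change inside the vessel plus lipid carried out with the outflow.
lipid_in_vessel = lip1 * X1 * V - lip0 * X0 * V
lipid_drawn_off = 0.5 * (lip0 * X0 + lip1 * X1) * F * dt
lipid_produced  = lipid_in_vessel + lipid_drawn_off

# Glycerol consumed (g): fed, minus what accumulated in or left the vessel unused.
gly_fed       = gly_feed * F * dt
gly_in_vessel = (glyb1 - glyb0) * V
gly_drawn_off = 0.5 * (glyb0 + glyb1) * F * dt
gly_consumed  = gly_fed - gly_in_vessel - gly_drawn_off

# Biomass produced (g): change inside the vessel plus cells washed out.
biomass_produced = (X1 - X0) * V + 0.5 * (X0 + X1) * F * dt

Q_L    = lipid_produced / (V * dt)          # volumetric lipid productivity, g/L/h
Y_Lgly = lipid_produced / gly_consumed      # g lipid per g glycerol consumed
Y_LX   = lipid_produced / biomass_produced  # g lipid per gCDW produced

print(f"Q_L = {Q_L:.2f} g/L/h, Y_L/gly = {Y_Lgly:.3f} g/g, Y_L/X = {Y_LX:.2f} g/g")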
Abbreviations
Y_L/gly: coefficient of lipid yield to glycerol consumption
Y_L/X: coefficient of lipid yield to biomass yield
Q_L: volumetric lipid productivity
X: biomass yield
CDW: cell dry weight
SCFB: stepwise continuous fed-batch
GLY: glycerol
L: intracellular lipid
D: dilution rate
rpm: rotations per minute
Tai M, Stephanopoulos G (2013) Engineering the push and pull of lipid biosynthesis in oleaginous yeast Yarrowia lipolytica for biofuel production. Metab Eng 15:1–9
Hill J, Nelson E, Tilman D, Polasky S, Tiffany D (2006) Environmental, economic, and energetic costs and benefits of biodiesel and ethanol biofuels. Proc Natl Acad Sci USA 103:11206–11210
Papanikolaou S, Chevalot I, Komaitis M, Aggelis G, Marc I (2001) Kinetic profile of the cellular lipid composition in an oleaginous Yarrowia lipolytica capable of producing a cocoabutter substitute from industrial fats. Antonie Van Leeuwenhoek 80:215–224
Papanikolaou S, Chevalot I, Komaitis M, Marc I, Aggelis G (2002) Single cell oil production by Yarrowia lipolytica growing on an industrial derivative of animal fat in batch cultures. Appl Microbiol Biotechnol 58:308–312
Meher LC, Sagar DV, Naik SN (2006) Technical aspects of biodiesel production by transesterification—a review. Renew Sust Energy Rev 10:248–268
Beopoulos A, Cescut J, Haddouche R, Uribelarrea JL, Molina-Jouve C, Nicaud JM (2009) Yarrowia lipolytica as a model for bio-oil production. Prog Lipid Res 48:375–387
Beopoulos A, Nicaud JM (2012) Yeast: a new oil producer. Ol Corps Gras Lipides OCL 19:22–88. doi:10.1684/ocl.2012.0426
Thevenieau F, Nicaud J-M (2013) Microorganisms as sources of oils. Ol Corps Gras Lipides OCL 20(6):D603
Mliĉková K, Luo Y, Andrea S, Peĉ P, Chardot T, Nicaud JM (2004) Acyl-CoA oxidase, a key step for lipid accumulation in the yeast Yarrowia lipolytica. J Mol Catal B Enzym 28:81–85
Beopoulos A, Mrozova Z, Thevenieau F, Dall MT, Hapala I, Papanikolaou S et al (2008) Control of lipid accumulation in the yeast Yarrowia lipolytica. Appl Environ Microbiol 74:7779–7789
Blazeck J, Hill A, Liu L, Knight R, Miller J, Pan A et al (2014) Harnessing Yarrowia lipolytica lipogenesis to create a platform for lipid and biofuel production. Nat Commun 5:3131. doi:10.1038/ncomms4131
Li Y, Zhao ZK, Bai F (2007) High-density cultivation of oleaginous yeast Rhodosporidium toruloides Y4 in fed-batch culture. Enzyme Microb Tech 41:312–317
Zhao X, Kong X, Hua Y, Feng B, Zhao ZK (2008) Medium optimization for lipid production through co-fermentation of glucose and xylose by the oleaginous yeast Lipomyces starkeyi. Eur J Lipid Sci Technol 110:405–412
Meesters PAEP, Huijberts GNM, Eggink G (1996) High-cell-density cultivation of the lipid accumulating yeast Cryptococcus curvatus using glycerol as a carbon source. App Microbiol Biotechnol 45:575–579
Papanikolaou S, Aggelis G (2002) Lipid production by Yarrowia lipolytica growing on industrial glycerol in a single-stage continuous culture. Bioresour Technol 82:43–49
Barth G, Gaillardin C (1996) Yarrowia lipolytica. In: Wolf K (ed) Nonconventional yeasts in biotechnology. Springer-Verlag, Berlin, Heidelberg, New York, pp 313–388
Xie D, Jackson EN, Zhu Q (2015) Sustainable source of omega-3 eicosapentaenoic acid from metabolically engineered Yarrowia lipolytica: from fundamental research to commercial production. Appl Microbiol Biotechnol 99:1599–1610
Folch J, Lees M, Sloane-Stanley GH (1957) A simple method for the isolation and purification of total lipids from animal tissues. J Biol Chem 226:497–509
Lazar Z, Dulermo T, Neuvéglise C, Crutz-LeCoq A-M, Nicaud J-M (2014) Hexokinase—a limiting factor in lipid production from fructose in Yarrowia lipolytica. Metab Eng 26:89–99
Joshi S, Bharucha C, Jha S, Yadav S, Nerurkar A, Desa AJ (2008) Biosurfactant production using molasses and whey under thermophilic conditions. Bioresour Technol 99:195–199
Makkar RS, Swaranjit S, Cameotra SS (1997) Utilization of molasses for biosurfactant production by two Bacillus strains at thermophilic conditions. J Am Oil Chem Soc 74:887–889
Babel W, Hofmann KH (1981) The conversion of triosephosphate via methylglyoxal, a bypass to the glycolytic sequence in methylotrophic yeasts? FEMS Microbiol Lett 10:133–136
Ermakova IT, Morgunov IG (1987) Pathways of glycerol metabolism in Yarrowia (Candida) lipolytica yeasts. Mikrobiology 57:533–537
May JW, Sloan J (1981) Glycerol utilization by Schizosaccharomyces pombe: dehydrogenation as the initial step. J Gen Microbiol 123:183–185
Makri A, Fakas S, Aggelis G (2010) Metabolic activities of biotechnological interest in Yarrowia lipolytica grown on glycerol in repeated batch cultures. Bioresour Technol 101:2351–2358
Bellou S, Makri A, Triantaphyllidou I-E, Papanikolaou S, Aggelis G (2014) Morphological and metabolic shifts of Yarrowia lipolytica induced by alteration of the dissolved oxygen concentration in the growth environment. Microbiology 160:807–817
Ykema A, Verbree EC, Verseveld HW, Smit H (1986) Mathematical modeling of lipid production by oleaginous yeast in continuous cultures. Antonie Van Leeuwenhoek 52:491–506
Brown BD, Hsu KH, Hammond EG, Glatz BA (1989) A relationship between growth and lipid accumulation in Candida curvata D. J Ferment Bioeng 68:344–352
Ratledge C (1994) Yeast moulds algae and bacteria as sources of lipids. In: Kamel BS, Kakuda Y (eds) Technological advances in improved and alternative sources of lipids. Blackie academic and professional, London, pp 235–291
Evans CT, Ratledge C (1983) A comparison of the oleaginous yeast Candida curvata grown on different carbon sources in continuous and batch culture. Lipids 18:623–629
Rywińska A, Juszczyk P, Wojtatowicz M, Rymowicz W (2011) Chemostat study of citric acid production from glycerol by Yarrowia lipolytica. J Biotechnol 152:54–57
Davies RJ (1992) Scale up of yeast oil technology. In: Ratledge C, Kyle DJ (eds) Industrial application of single cell oil. AOCS Press, Champaign, pp 196–218
Meesters PAEP, van de Wal H, Weusthuis R, Eggink G (1996) Cultivation of the oleaginous yeast Cryptococcus curvatus in a new reactor with improved mixing and mass transfer characteristics (surer®). Biotechnol Tech 10:277–282
Browse J, Mc Court PJ, Somerville CR (1986) Fatty acid composition of leaf lipids determined after combined digestion and fatty acid methyl ester formation from fresh tissue. Anal Biochem 152:141–145
MR, ZL, TD, and J-MN conceived the study and participated in its design. MR, ZL, and TD carried out the experiments. MR wrote the first draft of the manuscript. MR, ZL, J-MN, and PF analyzed the results and assessed the culture data. MR, ZL, TD, J-MN, and PF revised the manuscript. All authors read and approved the final manuscript.
This work was funded by the French National Institute for Agricultural Research (INRA). M. Rakicka was funded by INRA. T. Dulermo and Z. Lazar were funded by the French National Research Agency (Investissements d'avenir program; reference ANR-11-BTBR-0003). Z. Lazar received financial support from the European Union in the form of an AgreenSkills Fellowship (grant agreement no. 267196; Marie-Curie FP7 COFUND People Program). We would also like to thank Jessica Pearce and Lindsay Higgins for their language editing services.
Author information:
Magdalena Rakicka (present address: Department of Biotechnology and Food Microbiology, Wrocław University of Environmental and Life Sciences, Chełmońskiego Str. 37/41, 51-630 Wrocław, Poland), Zbigniew Lazar, Thierry Dulermo, and Jean Marc Nicaud: INRA, UMR1319 Micalis, 78350 Jouy-en-Josas, France; AgroParisTech, UMR Micalis, Jouy-en-Josas, France.
Patrick Fickers: Microbial Processes and Interactions, Gembloux Agro Bio-Tech, Université de Liège, Passage des Déportés 2, 5030 Gembloux, Belgium.
Additional affiliation: Institut Micalis, INRA-AgroParisTech, UMR1319, Team BIMLip: Biologie Intégrative du Métabolisme Lipidique, CBAI, 78850 Thiverval-Grignon, France.
Correspondence to Magdalena Rakicka or Jean Marc Nicaud.
Yarrowia lipolytica
Oleaginous yeast
Biolipid production
Crude glycerol
Continuous culture
Making work easier (but there's always a catch)
Simple machines are (simple) devices that either change the direction of a needed force or that reduce the amount of force required to do some work. An example of the former would be a rope and a single pulley, so that a downward pull can be used to raise a load. In this case, the amount of downward force required to lift the load is the same as the upward force if we just lifted it; only the direction is different.
As we'll see below, other pulley arrangements not only change the direction of a required force, but also reduce the amount of force required to do the same amount of work, but always at the expense of more distance.
A good example of this idea is the inclined plane.
In the diagram, the crate can be lifted directly to height h, or it can be moved there via the inclined plane. In the first case, the work required is w = mgh. In the second, the force required to push the crate up the ramp is less than mg, but the distance up the ramp to the destination height is larger, in such a way as to keep the required work, w = F·d, the same. This is an example of the principle of conservation of energy.
The simple machines
Simple machines have been studied since the Greeks. They are fundamental parts of many more complicated mechanical machines, and the thinking was that any machine at all could be represented as a collection of one or more of those listed below. For example, a car uses wheels, pulleys, levers, screws, and so forth.
Simple machines allow us to exert more force than we might be capable of producing by spreading that force out over more distance, or iterations of some smaller distance. A wrench (really a lever) is a good example. Few of us are capable of exerting enough twisting force on a stuck bolt with our hands, but if we use a long-enough wrench, we can do it. The cost of being able to exert enough force comes in the greater distance through which we have to rotate the wrench handle, compared to just turning the bolt with our fingers.
In this section, we'll work through the list of simple machines and see how they allow us to spread force over distance to do useful work.
The most common list of simple machines is
Inclined plane
Lever
Pulley
Screw
Wedge
Wheel and axle
But as you'll see below, it's possible to pare that down to just two. The list isn't really that important. More important are the concepts that each have in common, namely the trade-off of reduced force for greater distance. Notice also that distance is generally proportional to time, so simple machines decrease the power required to do a job, too.
Simple machines obey the principle of conservation of energy, or more specifically, the idea that the amount of mechanical work done is independent of the path taken to change the position.
The change in work,
$$w = F \cdot d,$$
depends only on the initial and final positions of the object moved.
An inclined plane is a good example. Let's see how it works. Let's slide a 100 Kg crate up a ramp angled at 20˚, so that the bottom-left corner of the crate reaches a height of 4 m, as shown below. We'll ignore friction here, something we could add later without loss of any meaning in this example.
We need force and distance. The distance we can get by trigonometry:
$$ \begin{align} sin(20˚) &= \left( \frac{4}{d} \right), \; \text{so} \\ d &= \frac{4}{sin(20˚)} = 11.695 \; m^* \end{align}$$
*In order to avoid round-off error, I like to keep more digits around during the calculation, then round to significant digits at the end.
To calculate the force needed to push the box that distance up the ramp, we resolve the weight (the gravitational force vector) into a component pressing into the ramp and a component pointing down along the ramp, using a little trigonometry:
$$F_g = 100 Kg \cdot 9.8 \, \frac{m}{s^2} = 980 \; N$$
$$ \begin{align} F_x &= F_g sin(20˚) = 335.18 \; N \\ F_y &= F_g cos(20˚) = 920.90 \; N \\ \end{align}$$
We only need Fx for our purposes. The work is
$$ \begin{align} w &= F\cdot d \\ &= 335.18\;N (11.695 \; m) \\ &= 3920 \; J = 3.92 \; KJ \end{align}$$
Lifting straight up
Now let's compare that result to the work of just lifting the crate straight up to a height of 4 m.
$$ \begin{align} w &= mgh \\ &= F_g \, h \\ &= 980 \; N \cdot 4.0 \; m \\ &= 3920 \; J = 3.92 \; KJ \end{align}$$
Apart from rounding error in the intermediate steps of the first calculation, the energies (work) are the same. This isn't proof of conservation of energy, but it does show that work is independent of path for this simple machine.
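Here's the same comparison done with symbols instead of numbers, which makes the point in general (same setup as above: mass m pushed up a frictionless ramp of angle θ to a height h):

$$ \begin{align} F_{ramp} &= mg \, sin(\theta) \\ d &= \frac{h}{sin(\theta)} \\ w &= F_{ramp} \cdot d = mg \, sin(\theta) \cdot \frac{h}{sin(\theta)} = mgh \end{align}$$

The sin(θ) cancels, so no matter what the ramp angle is, the work comes out to mgh. A shallower ramp just means less force applied over more distance.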
There are three "classes" of levers, first, second and third. We'll go through those and discuss their mechanical advantage properties.
All levers consist of a fulcrum, a bar, a load (Fload) and a place where force is applied (Fapp). The load is the thing we want lifted, moved or turned. The bar is the device (usually something linear) that we use to transmit the force from application to load, and the fulcrum is the point around which the bar rotates. Here we'll call the distance between load and fulcrum a and the distance between fulcrum and applied force b.
Class 1 lever
The class 1 lever is probably the most familiar to most people. We often use it when we want to lift something we can't lift with just our muscles. The fulcrum is between the load and the applied force.
The mechanical advantage of such a lever is
$$a_m = \frac{b}{a}$$
Let's say we need to raise a 100 Kg load to a height of 1m using a lever with $b = 2a.$ The mechanical advantage is 2, which means that the force required to raise the load is half of its weight,
$$ \begin{align} F_{app} &= \frac{1}{2} F_g \\ &= \frac{1}{2}(100 \; Kg)(9.8 \; m/s^2) \\ &= 490 \; N \end{align}$$
The distance over which the force must be applied, however, is the mechanical advantage multiplied by the lifting distance, or 2·1 m = 2 m. So this lever divides the required force by two (the mechanical advantage), but multiplies the required distance by two as well. In this way the work is the same for either lifting the load straight up or using the lever.
For a lever with mechanical advantage am, the required force (Fapp) is $F_g / a_m$ and the distance over which that force must be applied is am times the lifting distance.
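Here's a quick check with the numbers from our example: 490 N applied through 2 m, versus lifting 980 N straight up through 1 m:

$$ \begin{align} w_{lever} &= F_{app} \cdot d_{app} = 490 \; N \cdot 2 \; m = 980 \; J \\ w_{lift} &= F_g \cdot h = 980 \; N \cdot 1 \; m = 980 \; J \end{align}$$

Same work either way; the lever only trades force for distance.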
Biomechanical example
One of many class-1 levers in the human musculoskeletal system is the system that rotates the skull up and down. The fulcrum is the point where the skull connects to the spine. The applied force is the muscles on the back of the neck (the trapezius is one of many) and the load is the weight of the front of the skull. Normal muscle tension in the bundle of muscles at the back of the neck holds the head level.
In a class-2 lever system, the load is between the fulcrum and the applied force, as shown below.
The mechanical advantage in this system is the same as a class-1 lever. One common example is a door. The fulcrum is at one edge, the load is the weight of the door, which we can concentrate at its center of mass, and the applied force is at the edge opposite the hinges.
The muscles that stand you on tip-toe are the calves (soleus & gastrocnemius). They attach to the bottom (distal end) of the femur and to the heel bone. The load of the weight of a human is transferred through the tibia, which is between the fulcrum (the ball of the foot) and the applied force (the constricting calves).
In a class-3 lever, the applied force is between the fulcrum and the load. In this configuration, the bar must be attached to the fulcrum, or else it would lift off. One example of a 3rd-class lever is a pair of tweezers, with the fulcrum at the closed end, the load at the open end, and the force in between.
In a 3rd-class lever, the maximum mechanical advantage is 1. Any position of the applied force other than directly opposite the load, results in an amplification of the force needed to move the load.
In order to raise a load held in your hand, your biceps, which attach to the strong bone of the forearm between the fulcrum (the elbow) and the hand, shorten and pull on the forearm bone (ulna), raising the load.
Pulleys are used to either re-direct the force needed to do work, to reduce it (at the expense of extending the distance over which the applied force is applied), or both.
Pulleys can either be fixed (stationary) or moving. The mechanical advantage in a pulley system is equal to the number of ropes leading to or coming from moving pulleys. In the simplest case, a single pulley is used to re-direct the force needed to move a load:
The upward force is converted to a downward force (which can be a great benefit), but there is no mechanical advantage to this system. The applied force is exactly equal to the load, as long as we're ignoring friction in the system.
2:1 pulley system
In a 2:1 pulley system, shown below, a second pulley is attached to the load, and thus moves along with it. In this system, there are two ropes attached to the moving pulley, giving a mechanical advantage of 2. Thus the force required to lift the crate will be half of the weight of the load, but for every meter of lift, we'll have to pull 2 m of rope through the system.
A 4:1 pulley system is shown below. It contains two moving pulleys, each with two ropes going to or coming from them. The four ropes attached to moving pulleys gives this system a 4:1 mechanical advantage.
The force needed to lift the load is ¼ of the load, but four times as much rope will have to be pulled through the system for each unit of lift of the load.
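For example, lifting our 100 Kg crate (Fg = 980 N) by 1 m with this 4:1 system (ignoring friction, as before):

$$ \begin{align} F_{app} &= \frac{F_g}{4} = \frac{980 \; N}{4} = 245 \; N \\ d_{rope} &= 4 \cdot 1 \; m = 4 \; m \\ w &= 245 \; N \cdot 4 \; m = 980 \; J = F_g \cdot 1 \; m \end{align}$$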
Pulley systems come in many shapes and sizes. A typical block-and-tackle setup is shown below.
This particular setup can be rigged as a 4:1 or 8:1 system for hauling against the kinds of heavy resistance encountered in larger sailing boats. With an 8:1 system, in order to move a sail or spar by 10 cm, we'd have to haul out 80 cm or 0.8 m of rope.
The screw
U.S. Patent office excerpt from the 1933 patent of the Phillips-head screw, J.P. Thompson
Screws and bolts of all kinds work on the same basic principle. They are essentially nails (narrow cylinders of metal) wrapped with a helical inclined plane, the "threads" of the screw.
Twisting a screw — in a hole in wood, for example — causes the threads to cut into the wood. The force needed to drive a same-sized cylinder straight into the wood is divided by the pitch (slope = rise/run). The trade-off for that reduced force is a greater distance over which that force must be applied, in this case, many circles have to be turned.
A screw is just a specialized version of the inclined plane. A bolt is a screw that is meant to slide into a pre-grooved receptacle with matching threads, such as a nut or a tapped (threaded) hole.
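To put a rough number on that: in one full turn, the screw advances by its pitch p (the "rise") while the point where we push travels around the circumference 2πr (the "run"), so the ideal mechanical advantage is

$$a_m = \frac{2 \pi r}{p}$$

With, say, r = 3 mm and p = 1 mm (illustrative values only, not from any particular screw), am ≈ 18.8, so a small twist produces a large driving force, at the cost of many turns.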
The wedge is closely related to the inclined plane. It is used to translate a force at one angle to a force at right angles to it. The most common example is the wood-splitting wedge used to split logs along the grain of the wood.
A wedge is like two inclined planes set back-to-back. The applied force can be resolved into components along the slanted face of the wedge and perpendicular to it. It is the perpendicular force that does the useful work of pushing outward.
Using a wedge (or a maul or an axe, which are just wedges on a stick), one can prepare enough wood to heat a small cabin for the winter.
Photo: J. Cruzan, 2017
spiral/helical
A spiral is a two-dimensional curve emanating from a central point, but with each new point on the curve lying an increasing distance from that central point. A spiral is a curve in a plane.
A helix is a three-dimensional figure, more closely related to a circle than a spiral. A helix is characterized by a radius, the nearest distance between any point on the helix and a central line about which it twists, and the pitch or angle of those twists with respect to a plane normal to the central axis.
Files are identified by filenames, which are represented in GAP as strings. Filenames can be created directly by the user or a program, but of course this is operating system dependent.
Filenames for some files can be constructed in a system independent way using the following functions. This is done by first getting a directory object for the directory the file shall reside in, and then constructing the filename. However, it is sometimes necessary to construct filenames of files in subdirectories relative to a given directory object. In this case the directory separator is always / even under DOS or MacOS.
Section 9.3 describes how to construct directory objects for the common GAP and system directories. Using the command Filename (9.4-1) it is possible to construct a filename pointing to a file in these directories. There are also functions to test for accessibility of files, see 9.6.
For portability filenames and directory names should be restricted to at most 8 alphanumerical characters optionally followed by a dot . and between 1 and 3 alphanumerical characters. Upper case letters should be avoided because some operating systems do not make any distinction between case, so that NaMe, Name and name all refer to the same file whereas some operating systems are case sensitive. To avoid problems only lower case characters should be used.
Another function which is system-dependent is LastSystemError (9.1-1).
LastSystemError returns a record describing the last system error that has occurred. This record contains at least the component message which is a string. This message is, however, highly operating system dependent and should only be used as an informational message for the user.
When GAP is started it determines a list of directories which we call the GAP root directories. In a running GAP session this list can be found in GAPInfo.RootPaths.
The core part of GAP knows which files to read relative to its root directories. For example when GAP wants to read its library file lib/group.gd, it appends this path to each path in GAPInfo.RootPaths until it finds the path of an existing file. The first file found this way is read.
Furthermore, GAP looks for available packages by examining the subdirectories pkg/ in each of the directories in GAPInfo.RootPaths.
The root directories are specified via one or several of the -l paths command line options, see 3.1. Furthermore, by default GAP automatically prepends a user specific GAP root directory to the list; this can be avoided by calling GAP with the -r option. The name of this user specific directory depends on your operating system, it can be found in GAPInfo.UserGapRoot. This directory can be used to tell GAP about personal preferences, to always load some additional code, to install additional packages, or to overwrite some GAP files. See 3.2 for more information how to do this.
IsDirectory is a category of directories.
Directory( string ) returns a directory object for the string string. Directory understands "." for "current directory", that is, the directory in which GAP was started. It also understands absolute paths.
If the variable GAPInfo.UserHome is defined (this may depend on the operating system) then Directory understands a string with a leading ~ (tilde) character for a path relative to the user's home directory (but a string beginning with "~other_user" is not interpreted as a path relative to other_user's home directory, as in a UNIX shell).
Paths are otherwise taken relative to the current directory.
DirectoryTemporary() returns a directory object in the category IsDirectory (9.3-1) for a new temporary directory. This is guaranteed to be newly created and empty immediately after the call to DirectoryTemporary. GAP will make a reasonable effort to remove this directory upon termination of the GAP job that created the directory.
If DirectoryTemporary is unable to create a new directory, fail is returned. In this case LastSystemError (9.1-1) can be used to get information about the error.
A warning message is given if more than 1000 temporary directories are created in any GAP session.
DirectoryCurrent() returns the directory object for the current directory.
DirectoriesLibrary returns the directory objects for the GAP library name as a list. name must be one of "lib" (the default), "doc", "tst", and so on.
The string "" is also legal and with this argument DirectoriesLibrary returns the list of GAP root directories. The return value of this call differs from GAPInfo.RootPaths in that the former is a list of directory objects and the latter a list of strings.
The directory name must exist in at least one of the root directories, otherwise fail is returned.
As the files in the GAP root directories (see 9.2) can be distributed into different directories in the filespace a list of directories is returned. In order to find an existing file in a GAP root directory you should pass that list to Filename (9.4-1) as the first argument. In order to create a filename for a new file inside a GAP root directory you should pass the first entry of that list. However, creating files inside the GAP root directory is not recommended, you should use DirectoryTemporary (9.3-3) instead.
DirectoriesSystemPrograms returns the directory objects for the list of directories where the system programs reside, as a list. Under UNIX this would usually represent $PATH.
DirectoryContents( dir ) returns a list of filenames/directory names that reside in the directory dir. The argument dir can either be given as a string indicating the name of the directory or as a directory object (see IsDirectory (9.3-1)). It is an error, if such a directory does not exist.
The ordering of the list entries can depend on the operating system.
An interactive way to show the contents of a directory is provided by the function BrowseDirectory (Browse: BrowseDirectory) from the GAP package Browse.
DirectoryDesktop() returns a directory object for the user's desktop directory as defined on many modern operating systems. The function is intended to provide a cross-platform interface to a directory that is easily accessible by the user. Under Unix systems (including Mac OS X) this will be the Desktop directory in the user's home directory if it exists, and the user's home directory otherwise. Under Windows it will be the user's Desktop folder (or the appropriate name under different languages).
DirectoryHome() returns a directory object for the user's home directory, defined as a directory in which the user will typically have full read and write access. The function is intended to provide a cross-platform interface to a directory that is easily accessible by the user. Under Unix systems (including Mac OS X) this will be the usual user home directory. Under Windows it will be the user's My Documents folder (or the appropriate name under different languages).
If the first argument is a directory object dir, Filename returns the (system dependent) filename as a string for the file with name name in the directory dir. Filename returns the filename regardless of whether the directory contains a file with name name or not.
If the first argument is a list list-of-dirs (possibly of length 1) of directory objects, then Filename searches the directories in order, and returns the filename for the file name in the first directory which contains a file name or fail if no directory contains a file name.
For example, in order to locate the system program date use DirectoriesSystemPrograms (9.3-6) together with the second form of Filename.
In order to locate the library file files.gd use DirectoriesLibrary (9.3-5) together with the second form of Filename.
In order to construct filenames for new files in a temporary directory use DirectoryTemporary (9.3-3) together with the first form of Filename.
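A short example session illustrates these three uses; the paths shown in the output below are only illustrative and will of course differ from system to system.

gap> # locate the system program `date'
gap> path := DirectoriesSystemPrograms();;
gap> Filename( path, "date" );
"/usr/bin/date"
gap> # locate the library file lib/files.gd in the GAP root directories
gap> Filename( DirectoriesLibrary(), "files.gd" );
"/home/gap/lib/files.gd"
gap> # construct a filename for a new file in a fresh temporary directory
gap> tmpdir := DirectoryTemporary();;
gap> Filename( tmpdir, "test.g" );
"/var/tmp/tmp.0.gap/test.g"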
The special filename "*stdin*" denotes the standard input, i.e., the stream through which the user enters commands to GAP. The exact behaviour of reading from "*stdin*" is operating system dependent, but usually the following happens. If GAP was started with no input redirection, statements are read from the terminal stream until the user enters the end of file character, which is usually Ctrl-D. Note that terminal streams are special, in that they may yield ordinary input after an end of file. Thus when control returns to the main read-eval-print loop the user can continue with GAP. If GAP was started with an input redirection, statements are read from the current position in the input file up to the end of the file. When control returns to the main read eval view loop the input stream will still return end of file, and GAP will terminate.
The special filename "*errin*" denotes the stream connected to the UNIX stderr output. This stream is usually connected to the terminal, even if the standard input was redirected, unless the standard error stream was also redirected, in which case opening of "*errin*" fails.
The special filename "*stdout*" can be used to print to the standard output.
The special filename "*errout*" can be used to print to the standard error output file, which is usually connected to the terminal, even if the standard output was redirected.
When the following functions return false one can use LastSystemError (9.1-1) to find out the reason (as provided by the operating system), see the examples.
IsExistingFile returns true if a file with the filename filename exists and can be seen by the GAP process. Otherwise false is returned.
IsReadableFile returns true if a file with the filename filename exists and the GAP process has read permissions for the file, or false if this is not the case.
IsWritableFile returns true if a file with the filename filename exists and the GAP process has write permissions for the file, or false if this is not the case.
IsExecutableFile returns true if a file with the filename filename exists and the GAP process has execute permissions for the file, or false if this is not the case. Note that execute permissions do not imply that it is possible to execute the file, e.g., it may only be executable on a different machine.
IsDirectoryPath returns true if the file with the filename filename exists and is a directory, and false otherwise. Note that this function does not check if the GAP process actually has write or execute permissions for the directory. You can use IsWritableFile (9.6-3), resp. IsExecutableFile (9.6-4) to check such permissions.
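For example (the file names used and the error message returned are system dependent; this is only a sketch of a typical session):

gap> IsExistingFile( "/bin/date" );
true
gap> IsExistingFile( "/bin/date.new" );
false
gap> err := LastSystemError();;
gap> err.message;
"No such file or directory"
gap> IsReadableFile( "/bin/date" );
true
gap> IsExecutableFile( "/bin/date" );
true
gap> IsDirectoryPath( "/bin" );
true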
Read( filename ) reads the input from the file with the filename filename, which must be given as a string.
Read first opens the file filename. If the file does not exist, or if GAP cannot open it, e.g., because of access restrictions, an error is signalled.
Then the contents of the file are read and evaluated, but the results are not printed. The reading and evaluations happens exactly as described for the main loop (see 6.1).
If a statement in the file causes an error a break loop is entered (see 6.4). The input for this break loop is not taken from the file, but from the input connected to the stderr output of GAP. If stderr is not connected to a terminal, no break loop is entered. If this break loop is left with quit (or Ctrl-D), GAP exits from the Read command, and from all enclosing Read commands, so that control is normally returned to an interactive prompt. The QUIT statement (see 6.7) can also be used in the break loop to exit GAP immediately.
Note that a statement must not begin in one file and end in another. I.e., eof (end-of-file) is not treated as whitespace, but as a special symbol that must not appear inside any statement.
Note that one file may very well contain a read statement causing another file to be read, before input is again taken from the first file. There is an upper limit of 15 on the number of files that may be open simultaneously.
ReadAsFunction( filename ) reads the file with filename filename as a function and returns this function.
Reading the file as a function will not affect a global variable a, as the example below illustrates.
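A minimal illustration, using a temporary file (the PrintTo function used to create the file is described below):

gap> name := Filename( DirectoryTemporary(), "example.g" );;
gap> PrintTo( name, "local a;\na := 10;\nreturn a*a;\n" );
gap> a := 1;;
gap> f := ReadAsFunction( name );;
gap> f();
100
gap> a;   # the variable a inside the file was local to the function
1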
PrintTo works like Print (6.3-4), except that the arguments obj1, \(\ldots\) (if present) are printed to the file with the name filename instead of the standard output. This file must of course be writable by GAP. Otherwise an error is signalled. Note that PrintTo will overwrite the previous contents of this file if it already existed; in particular, PrintTo with just the filename argument empties that file.
AppendTo works like PrintTo, except that the output does not overwrite the previous contents of the file, but is appended to the file.
There is an upper limit of 15 on the number of output files that may be open simultaneously.
Note that one should be careful not to write to a logfile (see LogTo (9.7-4)) with PrintTo or AppendTo.
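A small example combining PrintTo, AppendTo, and Read, again using a temporary file:

gap> name := Filename( DirectoryTemporary(), "example.g" );;
gap> PrintTo( name, "a := 1;\n" );        # creates (or overwrites) the file
gap> AppendTo( name, "a := a + 41;\n" );  # appends a second statement
gap> Read( name );                         # read and evaluate both statements
gap> a;
42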
Calling LogTo with a string filename causes the subsequent interaction to be logged to the file with the name filename, i.e., everything you see on your terminal will also appear in this file. (LogTo (10.4-5) may also be used to log to a stream.) This file must of course be writable by GAP, otherwise an error is signalled. Note that LogTo will overwrite the previous contents of this file if it already existed.
Called without arguments, LogTo stops logging to a file or stream.
Calling InputLogTo with a string filename causes the subsequent input to be logged to the file with the name filename, i.e., everything you type on your terminal will also appear in this file. Note that InputLogTo and LogTo (9.7-4) cannot be used at the same time while InputLogTo and OutputLogTo (9.7-6) can. Note that InputLogTo will overwrite the previous contents of this file if it already existed.
Called without arguments, InputLogTo stops logging to a file or stream.
Calling OutputLogTo with a string filename causes the subsequent output to be logged to the file with the name filename, i.e., everything GAP prints on your terminal will also appear in this file. Note that OutputLogTo and LogTo (9.7-4) cannot be used at the same time while InputLogTo (9.7-5) and OutputLogTo can. Note that OutputLogTo will overwrite the previous contents of this file if it already existed.
Called without arguments, OutputLogTo stops logging to a file or stream.
CRC (cyclic redundancy check) numbers provide a certain method of doing checksums. They are used by GAP to check whether files have changed.
CrcFile computes a checksum value for the file with filename filename and returns this value as an integer. The function returns fail if a system error occurred, say, for example, if filename does not exist. In this case the function LastSystemError (9.1-1) can be used to get information about the error.
RemoveFile( filename ) will remove the file with filename filename and returns true in case of success. The function returns fail if a system error occurred, for example, if your permissions do not allow the removal of filename. In this case the function LastSystemError (9.1-1) can be used to get information about the error.
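For example (the checksum value itself depends on the file contents, so it is not shown here):

gap> name := Filename( DirectoryTemporary(), "example.g" );;
gap> PrintTo( name, "a := 1;\n" );
gap> crc := CrcFile( name );;   # an integer checksum
gap> IsInt( crc );
true
gap> RemoveFile( name );
true
gap> IsExistingFile( name );
false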
UserHomeExpand( str ): if the string str starts with a '~' character this function returns a new string with the leading '~' substituted by the user's home directory as stored in GAPInfo.UserHome. Otherwise str is returned unchanged.
In general, it is not possible to read the same GAP library file twice, or to read a compiled version after reading a GAP version, because crucial global variables are made read-only (see 4.9) and filters and methods are added to global tables.
A partial solution to this problem is provided by the function Reread (and related functions RereadLib etc.). Reread( filename ) sets the global variable REREADING to true, reads the file named by filename and then resets REREADING. Various system functions behave differently when REREADING is set to true. In particular, assignment to read-only global variables is permitted, calls to NewRepresentation (79.2-1) and NewInfoClass (7.4-1) with parameters identical to those of an existing representation or info class will return the existing object, and methods installed with InstallMethod (78.2-1) may sometimes displace existing methods.
This function may not entirely produce the intended results, especially if what has changed is the super-representation of a representation or the requirements of a method. In these cases, it is necessary to restart GAP to read the modified file.
An additional use of Reread is to load the compiled version of a file for which the GAP language version had previously been read (or perhaps was included in a saved workspace). See 76.3-11 and 3.3 for more information.
It is not advisable to use Reread programmatically. For example, if a file that contains calls to Reread is read with Reread then REREADING may be reset too early.
How can this be done in TeX? I had a look at whether it could be done with TikZ but could not find any comparable examples. I am convinced that in theory it should be possible to prepare a TikZ extension that does things like this rather simply, but at the moment I am looking for an already existing solution to accomplish such tasks. When I get more experienced with making such plots I might consider writing a basic package.
Is it possible to make such plots in LaTeX with current packages?
If yes, with TikZ? Could you give me an example how to start?
If not, what software would you recommend to prepare and include such kind of 3D graphics?
Please note that this is not the kind of question you sometimes get here that asks "Please do this for me!" I honestly tried solving this problem but did not get far at all. It would be really great to get an answer, a lot of people in my field would benefit from this.
(1) Wertz, James R. (2009). Orbit & Constellation Design & Management. New York: Springer.
This is a long answer since there are good tools for spherical geometry scattered all around, so I created a few sections addressing those tools.
I suggest using \tdplotdrawarc . This is explained in the TikZ and PGF Manual. You need to define three angles $\alpha$, $\beta$ and $\gamma$ for the arc, then the radius, origin, initial and final angle. I include here an example with the angles used. With this example you can build new examples explaining other angle combinations.
% rotate circle to make it look better.
Here is a post which addresses drawing an equator when the north pole is given, with a simple macro to speed up coding: draw an equator when north pole is known.
Sometimes it is better to stay away from thinking and trying to do 3D. So here I am contradicting my own advice of using tikz-3dplot . Think about how to draw 3D while thinking in 2D (that is, ellipses and arcs).
The next example is an improvement over an example shown here: Spherical triangles and great circles . The code is based on @Tarass's great insight. The example is shown here mostly to demonstrate the capabilities of TikZ and its use for other purposes. As I said, in general it is better to use \tdplotdrawarc .
In spherical geometry understanding where coordinates (a point) are and how to draw arcs is a fundamental issue.
There could be confusion because spherical coordinates for mathematicians and physicists use different symbols; the following link provides macros for conversion between spherical (azimuth, polar) and Cartesian coordinates and addresses conversions in terms of geographic (latitude, altitude) coordinates as well: spherical coordinates in 3d .
Finally, since TikZ does not seem to have tools to draw arcs given a center and a radius, I wrote a macro and posted it here .
The R package GeoMap will create spherical projections of the earth with the continent maps. I have not used it except to verify that it loads and builds a map. If you combine it with the package tikzDevice you will get TikZ code which could be modified. Be aware that it will be a large file due to the extensive use of points for plotting.
Once this is working you should be able to implement with Sweave so that all the code is contained within the LaTeX file.
I would consider this just a workaround until a package is built with pure TikZ.
|
CommonCrawl
|
Local uniformization
In algebraic geometry, local uniformization is a weak form of resolution of singularities, stating roughly that a variety can be desingularized near any valuation, or in other words that the Zariski–Riemann space of the variety is in some sense nonsingular. Local uniformization was introduced by Zariski (1939, 1940), who separated out the problem of resolving the singularities of a variety into the problem of local uniformization and the problem of combining the local uniformizations into a global desingularization.
Local uniformization of a variety at a valuation of its function field means finding a projective model of the variety such that the center of the valuation is non-singular. This is weaker than resolution of singularities: if there is a resolution of singularities then this is a model such that the center of every valuation is non-singular. Zariski (1944b) proved that if one can show local uniformization of a variety then one can find a finite number of models such that every valuation has a non-singular center on at least one of these models. To complete a proof of resolution of singularities it is then sufficient to show that one can combine these finite models into a single model, but this seems rather hard. (Local uniformization at a valuation does not directly imply resolution at the center of the valuation: roughly speaking, it only implies resolution in a sort of "wedge" near this point, and it seems hard to combine the resolutions of different wedges into a resolution at a point.)
Zariski (1940) proved local uniformization of varieties in any dimension over fields of characteristic 0, and used this to prove resolution of singularities for varieties in characteristic 0 of dimension at most 3. Local uniformization in positive characteristic seems to be much harder. Abhyankar (1956, 1966) proved local uniformization in all characteristics for surfaces and in characteristics at least 7 for 3-folds, and was able to deduce global resolution of singularities in these cases from this. Cutkosky (2009) simplified Abhyankar's long proof. Cossart and Piltant (2008, 2009) extended Abhyankar's proof of local uniformization of 3-folds to the remaining characteristics 2, 3, and 5. Temkin (2013) showed that it is possible to find a local uniformization of any valuation after taking a purely inseparable extension of the function field.
Local uniformization in positive characteristic for varieties of dimension at least 4 is (as of 2019) an open problem.
References
• Abhyankar, Shreeram (1956), "Local uniformization on algebraic surfaces over ground fields of characteristic p≠0", Annals of Mathematics, Second Series, 63 (3): 491–526, doi:10.2307/1970014, JSTOR 1970014, MR 0078017
• Abhyankar, Shreeram S. (1966), Resolution of singularities of embedded algebraic surfaces, Springer Monographs in Mathematics, Acad. Press, doi:10.1007/978-3-662-03580-1, ISBN 3-540-63719-2 (1998 2nd edition)
• Cossart, Vincent; Piltant, Olivier (2008), "Resolution of singularities of threefolds in positive characteristic. I. Reduction to local uniformization on Artin–Schreier and purely inseparable coverings", Journal of Algebra, 320 (3): 1051–1082, doi:10.1016/j.jalgebra.2008.03.032, MR 2427629
• Cossart, Vincent; Piltant, Olivier (2009), "Resolution of singularities of threefolds in positive characteristic. II" (PDF), Journal of Algebra, 321 (7): 1836–1976, doi:10.1016/j.jalgebra.2008.11.030, MR 2494751
• Cutkosky, Steven Dale (2009), "Resolution of singularities for 3-folds in positive characteristic", Amer. J. Math., 131 (1): 59–127, arXiv:math/0606530, doi:10.1353/ajm.0.0036, JSTOR 40068184, MR 2488485, S2CID 2139305
• Temkin, Michael (2013), "Inseparable local uniformization", J. Algebra, 373: 65–119, arXiv:0804.1554, doi:10.1016/j.jalgebra.2012.09.023, MR 2995017, S2CID 115167009
• Zariski, Oscar (1939), "The reduction of the singularities of an algebraic surface", Ann. of Math., 2, 40 (3): 639–689, doi:10.2307/1968949, JSTOR 1968949
• Zariski, Oscar (1940), "Local uniformization on algebraic varieties", Ann. of Math., 2, 41 (4): 852–896, doi:10.2307/1968864, JSTOR 1968864, MR 0002864
• Zariski, Oscar (1944a), "The compactness of the Riemann manifold of an abstract field of algebraic functions", Bulletin of the American Mathematical Society, 50 (10): 683–691, doi:10.1090/S0002-9904-1944-08206-2, ISSN 0002-9904, MR 0011573
• Zariski, Oscar (1944b), "Reduction of the singularities of algebraic three dimensional varieties", Ann. of Math., 2, 45 (3): 472–542, doi:10.2307/1969189, JSTOR 1969189, MR 0011006
External links
• "Local uniformization", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
|
Wikipedia
|
Triangle $ABC$ has side lengths $AB=7, BC=8,$ and $CA=9.$ Circle $\omega_1$ passes through $B$ and is tangent to line $AC$ at $A.$ Circle $\omega_2$ passes through $C$ and is tangent to line $AB$ at $A.$ Let $K$ be the intersection of circles $\omega_1$ and $\omega_2$ not equal to $A.$ Then $AK=\tfrac mn,$ where $m$ and $n$ are relatively prime positive integers. Find $m+n.$
[asy] unitsize(20); pair B = (0,0); pair A = (2,sqrt(45)); pair C = (8,0); draw(circumcircle(A,B,(-17/8,0)),rgb(.7,.7,.7)); draw(circumcircle(A,C,(49/8,0)),rgb(.7,.7,.7)); draw(B--A--C--cycle); label("$A$",A,dir(105)); label("$B$",B,dir(-135)); label("$C$",C,dir(-75)); dot((2.68,2.25)); label("$K$",(2.68,2.25),dir(-150)); label("$\omega_1$",(-6,1)); label("$\omega_2$",(14,6)); label("$7$",(A+B)/2,dir(140)); label("$8$",(B+C)/2,dir(-90)); label("$9$",(A+C)/2,dir(60)); [/asy]
Note from the tangency condition that the supplements of $\angle CAB$ with respect to lines $AB$ and $AC$ are equal to $\angle AKB$ and $\angle AKC$, respectively, so from tangent-chord,\[\angle AKC=\angle AKB=180^{\circ}-\angle BAC.\]Also note that $\angle ABK=\angle KAC$, so $\triangle AKB\sim \triangle CKA$. Using similarity ratios, we can easily find\[AK^2=BK\cdot KC.\]However, since $AB=7$ and $CA=9$, we can use similarity ratios to get\[BK=\frac{7}{9}AK, \quad CK=\frac{9}{7}AK.\]Now we use the Law of Cosines on $\triangle AKB$, namely $AB^2=AK^2+BK^2-2\,AK\cdot BK\cos{\angle AKB}$. From the reverse Law of Cosines, $\cos{\angle BAC}=\frac{11}{21}\implies \cos{\angle AKB}=\cos{(180^{\circ}-\angle BAC)}=-\frac{11}{21}$. This gives us\[AK^2+\frac{49}{81}AK^2+\frac{22}{27}AK^2=49\]\[\implies \frac{196}{81}AK^2=49\]\[AK=\frac{9}{2},\]so our answer is $9+2=\boxed{11}$.
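For a quick numerical sanity check (not part of the original solution), the short Python sketch below places the triangle using the coordinates from the Asymptote code above, constructs the two tangent circles, and measures $AK$; the helper-function name and the reflection construction for the second intersection point are ours.

import math

B = (0.0, 0.0)
A = (2.0, math.sqrt(45.0))
C = (8.0, 0.0)

def tangent_circle_center(P, T, D):
    # Center of the circle through P tangent to line TD at T: it lies on the
    # normal to TD at T and is equidistant from T and P.
    n = (-(D[1] - T[1]), D[0] - T[0])
    L = math.hypot(*n)
    n = (n[0] / L, n[1] / L)
    w = (T[0] - P[0], T[1] - P[1])
    t = -(w[0]**2 + w[1]**2) / (2 * (n[0] * w[0] + n[1] * w[1]))
    return (T[0] + t * n[0], T[1] + t * n[1])

O1 = tangent_circle_center(B, A, C)   # omega_1: through B, tangent to AC at A
O2 = tangent_circle_center(C, A, B)   # omega_2: through C, tangent to AB at A

# The second intersection K is the reflection of A across the line O1O2.
ux, uy = O2[0] - O1[0], O2[1] - O1[1]
L = math.hypot(ux, uy)
ux, uy = ux / L, uy / L
proj = (A[0] - O1[0]) * ux + (A[1] - O1[1]) * uy
foot = (O1[0] + proj * ux, O1[1] + proj * uy)
K = (2 * foot[0] - A[0], 2 * foot[1] - A[1])

print(math.hypot(K[0] - A[0], K[1] - A[1]))   # prints 4.5, i.e. AK = 9/2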
|
Math Dataset
|
\begin{document}
\title{Spatially multiplexed single-photon sources based on incomplete binary-tree multiplexers} \author{Peter Adam,\authormark{1,2,*} Ferenc Bodog\authormark{3}, and Matyas Mechler\authormark{2}}
\address{\authormark{1}Institute for Solid State Physics and Optics, Wigner Research Centre for Physics,\\ P.O. Box 49, H-1525 Budapest, Hungary\\ \authormark{2}Institute of Physics, University of P\'ecs, Ifj\'us\'ag \'utja 6, H-7624 P\'ecs, Hungary\\ \authormark{3}MTA-PTE High-Field Terahertz Research Group, H-7624 P\'ecs, Hungary}
\email{\authormark{*}[email protected]}
\begin{abstract} We propose two novel types of spatially multiplexed single-photon sources based on incomplete binary-tree multiplexers. The incomplete multiplexers are extensions of complete binary-tree multiplexers, and they contain incomplete branches either at their input or at their output. We analyze and optimize these systems realized with general asymmetric routers and photon-number-resolving detectors by applying a general statistical theory introduced previously that includes all relevant loss mechanisms. We show that the use of any of the two proposed multiplexing systems can lead to higher single-photon probabilities than that achieved with complete binary-tree multiplexers. Single-photon sources based on output-extended incomplete binary-tree multiplexers outperform those based on input-extended ones in the considered parameter ranges, and they can in principle yield single-photon probabilities higher than 0.93 when they are realized by state-of-the-art bulk optical elements. \end{abstract}
\section{Introduction} The development of single-photon sources (SPSs) is of utmost importance for the effective realization of a number of experiments in the fields of photonic quantum technology and quantum information processing~\cite{EisamanRSI2011, MScott2020}. A promising realization of periodic SPSs are the heralded single-photon sources that can yield highly indistinguishable single photons in near-perfect spatial modes with known polarization \cite{PittmanOC2005, Mosley2008, Brida2011, RamelowOE2013, MassaroNJP2019}. In such sources, the detection of one member of a correlated photon pair generated in spontaneous parametric down-conversion (SPDC) or spontaneous four-wave mixing (SFWM) heralds the presence of its twin photon. The inherent probabilistic nature of the photon pair generation in these nonlinear sources results in the occasional occurrence of multiphoton events in the pair generation. This detrimental effect can be reduced by various multiplexing techniques such as spatial multiplexing \cite{Migdall2002, ShapiroWong2007, Ma2010, Collins2013, Meany2014, Francis2016, KiyoharaOE2016} and time multiplexing \cite{Pittman2002, Jeffrey2004, Mower2011, Schmiegelow2014, Francis2015, Kaneda2015, Rohde2015, XiongNC2016, Hoggarth2017, HeuckNJP2018, Kaneda2019, Lee2019, MagnoniQIP2019} where heralded photons generated in a set of multiplexed units realized in space or in time are rerouted to a single output mode by a switching system. In multiplexed SPSs, the multi-photon noise can be suppressed by keeping the mean photon number of the generated photon pairs low in a multiplexed unit, while the high probability of successful heralding in the whole system can be guaranteed by the use of several multiplexed units. Multi-photon events can also be reduced by using single-photon detectors with photon-number-resolving capabilities for heralding \cite{RamelowOE2013, BonneauNJP2015, KiyoharaOE2016, Bodog2020}. High-efficiency inherent photon-number resolving detectors (PNRDs) in various realizations such as transition edge sensors \cite{Lita2008, Fukuda2011, Schmidt2018, Fukuda2019} or superconducting nanowire detectors~\cite{Divochiy2008, Jahanmirinejad2012} are also available for this task.
An unavoidable issue of real multiplexed systems is the appearance of various losses of the optical elements in both the heralding stage and the multiplexing system that leads to the limitation of the performance of multiplexed SPSs \cite{MazzarellaPRA2013, BonneauNJP2015}. Full statistical frameworks have already been developed for the description of any kind of multiplexed SPSs using various photon detectors that take all relevant loss mechanisms into account \cite{Adam2014, Bodog2016, Bodog2020}. These frameworks make it possible to optimize multiplexed SPSs, that is, to maximize the output single-photon probability by determining the optimal values of the system size and the mean number of photon pairs generated in the multiplexed units for a given set of loss parameters. The analysis of various multiplexed SPSs showed that the single-photon probability that can be achieved in these systems after the optimization are different even by using identical optical elements in the setups. This finding motivates the development of novel multiplexing schemes leading to higher single-photon probabilities.
In spatial multiplexing, which is the topic of the current research, several individual pulsed heralded SPSs are used in parallel. In these systems, after a successful heralding event in one of the multiplexed units, the heralded signal photon is rerouted to the single output by a set of binary photon routers. In the literature, these routers were proposed to be arranged into an asymmetric (chain) or a symmetric (binary-tree) structure \cite{MazzarellaPRA2013, BonneauNJP2015}. Spatial multiplexing has been realized experimentally up to two multiplexed units by using SFWM in photonic crystal fibers \cite{Collins2013, Francis2016}, and up to four multiplexed units by using SPDC in bulk crystals \cite{Ma2010,KiyoharaOE2016} and waveguides \cite{Meany2014}. In all these experiments symmetric, that is, complete binary-tree multiplexers were used.
In this paper, we propose two novel types of spatially multiplexed SPSs based on incomplete binary-tree multiplexers built with asymmetric binary routers. We analyze the proposed systems in detail by applying the statistical theory introduced in Ref.~\cite{Bodog2020} for describing multiplexed SPSs equipped with PNRDs. We show that the proposed schemes can yield higher single-photon probabilities than the complete binary-tree scheme realized thus far in experiments.
\section{Incomplete binary-tree multiplexers}\label{sec:mplxers} The idea of any spatial multiplexing scheme is to convey heralded photons generated in a set of multiplexed units (MUs) to the single output of a multiplexer characterized by a given geometry of a set of binary (2-to-1) photon routers (PRs). Each of the multiplexed units contains a nonlinear photon pair source, a detector used to herald the presence of a signal photon of a photon pair by detecting the corresponding idler (twin) photon, and optionally a delay line placed in the path of the signal photon. Using such delay lines might be required in order to introduce a sufficient delay into the arrival time of the signal photons to the inputs of the multiplexer arms, and thus to enable the operation of the logic controlling the routers. When the detector is activated by the presence of one or more idler photons of photon pairs generated by the nonlinear photon pair source in a multiplexed unit, the corresponding signal photons are coupled into the multiplexer. In the most general case, photon-number-resolving detectors (PNRDs) can be used to detect the idler photons. These detectors can realize any detection strategy defined by the actual detected photon numbers, that is, the set of predefined numbers of idler photons for which the corresponding signal photons generated in the nonlinear process are allowed to enter the multiplexer. As for the periodicity of the single-photon source required in most of the applications, it can be guaranteed by pulsed pumping of the photon pair source.
\begin{figure*}
\caption{Schematic figure of a bulk optical photon router PR$_i$. PBSs denote polarizing beam splitters, PCs are Pockels cells. $V_r$ and $V_t$ denote transmission coefficients characterizing the losses for Input$_1$ and Input$_2$, respectively. }
\label{fig:Router}
\end{figure*} After the generation of photon pairs and the detection of the idler photons, the corresponding signal photons are conveyed into a multiplexer system characterized by a particular arrangement of a set of photon routers (PRs). In our analysis, all the PRs forming the spatial multiplexer are assumed to be identical. Routers are usually assumed to be symmetric; however, this restriction is not necessary: routers can be asymmetric, that is, they might have different transmission coefficients assigned to their two input ports. Figure~\ref{fig:Router} presents a possible bulk optical realization of such an asymmetric binary photon router. The building blocks of these routers are two Pockels-cells (PCs) serving as possible entrance points of the signal photons generated in the multiplexed units, and two polarizing beam splitters (PBSs), one of which acts as the output of the PR.
The polarization of the signal photons is known at the two input ports of the router. The PCs controlled by a priority logic can modify the polarization of these photons so that the PBSs can select and reroute the photons in the chosen mode to the output of the routers and eventually to the output of the whole multiplexer. If a mode is not selected, it can be directed out of the system or it can be absorbed by a suitable optical element.
Asymmetric binary routers are characterized by two transmission coefficients $V_r$ and $V_t$ corresponding to the transmission probabilities of the photons entering the router at Input$_1$ and Input$_2$, respectively. In the case of the router presented in Fig.~\ref{fig:Router} $V_r$ quantifies the losses due to the transmission through a PC and the two reflections in the PBSs. The other transmission, $V_t$, describes the losses introduced by the transmission through a PC and a PBS. These transmissions also contain an additional propagation loss in the router. Later on in this paper $V_t$ and $V_r$ will be referred to as the \emph{transmission and reflection efficiencies} and in all the schemes discussed in our work we will use routers with the coefficients $V_r$ and $V_t$ belonging to the left and right inputs of the router, respectively.
Previous papers aiming at theoretical modeling or experimental realization of periodic single-photon sources based on spatial multiplexing used two main types of multiplexers. One of them is an asymmetric architecture in which the routers are arranged into a chain structure, that is, the outputs of the newly added routers are always coupled to one of the inputs of the previously added router \cite{MazzarellaPRA2013, BonneauNJP2015}. The other geometry is a symmetric structure in which the constituent routers are arranged into a complete binary-tree multiplexer (CBTM) \cite{Migdall2002, ShapiroWong2007, Ma2010, Collins2013, Meany2014, BonneauNJP2015, Francis2016, KiyoharaOE2016, Adam2014, Bodog2016, Bodog2020}.
An asymmetric architecture can have any number of inputs $N$. However, in the case of the symmetric arrangement the number of inputs is restricted to a power of two, that is, $N=2^m$, where $m$ is the number of levels in the symmetric multiplexer. Below we propose two novel arrangements presented in Figs.~\ref{fig:branch_scheme} and \ref{fig:root-scheme} that are essentially incomplete binary-tree multiplexers in which the number of inputs is arbitrary. In these schemes, an initially $m$-level symmetric multiplexer is extended step-by-step toward another, $m+1$-level symmetric multiplexer by adding new photon routers and multiplexed units to the system. In Figs.~\ref{fig:branch_scheme} and \ref{fig:root-scheme} PR$_i$s denote asymmetric binary (2-to-1) photon routers and MU$_i$s represent multiplexed units. The various inputs (or arms) of the multiplexers are numbered from left to right, their overall number $N$ is equal to the number of MUs.
The first proposed novel multiplexing scheme is presented in Fig.~\ref{fig:branch_scheme}. \begin{figure}
\caption{Schematic diagram of an input-extended incomplete binary-tree multiplexer (IIBTM). PR$_i$s and MU$_i$s denote binary photon routers and multiplexed units, respectively. Routers with a light red background form a 3-level complete binary-tree multiplexer (CBTM). Numbering of the PRs reflect the order in which they are added to the multiplexer.}
\label{fig:branch_scheme}
\end{figure} In this scheme, new asymmetric PRs are added to the inputs of an initially $m$-level symmetric multiplexer indicated by light red background in the figure one by one from left to right. This building strategy is reflected by the numbering of the PRs in the figure. This type of multiplexing scheme will be referred to as \emph{input-extended incomplete binary-tree multiplexer} (IIBTM).
The structure of the other proposed novel incomplete binary-tree multiplexer presented in Fig.~\ref{fig:root-scheme} is also based on an initially complete binary-tree multiplexer indicated by light red background in the figure. The next step is to couple the output of the initial symmetric multiplexer into one of the inputs of a newly added router. In the figure, such a novel router is indicated by PR$_8$. Let us assume that always the left input of the novel router is used in such a situation. Then another router is added to the other (right) input of the previously added router. In the figure, such a router is denoted by PR$_9$, and it is built onto the right input of PR$_8$. Then the subsequent new routers are added to the incomplete branch of the multiplexer one by one from left to right until the given level is completed. This process is repeated until an $m+1$-level symmetric multiplexer is formed. \begin{figure}
\caption{Schematic diagram of an output-extended incomplete binary-tree multiplexer (OIBTM). PR$_i$s and MU$_i$s denote binary photon routers and multiplexed units, respectively. Routers with a light red background form a 3-level complete binary-tree multiplexer (CBTM). Numbering of the PRs reflect the order in which they are added to the multiplexer.}
\label{fig:root-scheme}
\end{figure} This building strategy is represented by the numbering of the PRs in the figure. Throughout our paper, this arrangement will be referred to as \emph{output-extended incomplete binary-tree multiplexer} (OIBTM).
Next, we introduce the formulas characterizing the transmission through the various arms of the multiplexers. This quantity will be termed as \emph{total transmission coefficient} and denoted by $V_n$. Its role is explained in detail in Sec.~\ref{sec:stat-theo}.
The formula describing the total transmission coefficient $V_n^{\rm sym}$ of the $n$th arm of a CBTM containing $m$ levels and $N=2^m$ inputs is \begin{equation} V_n^{\rm sym}=V_r^{m-H(n-1)}V_t^{H(n-1)}, \quad n=[1,2,\dots,N],\label{eq:sym:seqnr} \end{equation} where $H(x)$ denotes the Hamming weight of $x$, that is, the number of ones in its binary representation.
In the case of an IIBTM let us assume that the overall number of inputs is $N$. In Fig.~\ref{fig:branch_scheme} this number is $N=11$. Denote the number of inputs or MUs at the level with the highest number by $N_1$. In the figure on level 4 there are 3 routers (PR$_8$ to PR$_{10}$) with 6 inputs, therefore $N_1=6$. Then the total transmission coefficient $V_n^{\rm in}$ characterizing the $n$th arm of an IIBTM can be expressed as \begin{equation} \begin{aligned} V_n^{\rm in}&=V_r^{m_1+1-H(n-1)}V_t^{H(n-1)}&&\text{if}\quad 0<n\le N_1,\\ V_n^{\rm in}&=V_r^{m_1-H(n-N_1/2-1)}V_t^{H(n-N_1/2-1)}&& \text{if} \quad N_1<n\le N. \end{aligned}\label{eq:branch:seqnr} \end{equation} Both quantities $m_1$ and $N_1$ are determined by the overall number of inputs $N$. The number of levels in the initial symmetric multiplexer is $m_1=\lfloor\log_2N\rfloor$, where $\lfloor x\rfloor$ denotes the floor function that gives as output the greatest integer less than or equal to $x$. In the figure $m_1=3$. Finally, the value $N_1$ can be derived as $N_1=2(N-2^{m_1})$.
As an example, we present the list of total transmission coefficients $V_n^{\rm in}$ characterizing the arms of the multiplexer of the IIBTM scheme shown in Fig.~\ref{fig:branch_scheme}: \begin{equation} V_n^{\rm in}=[V_r^4, V_r^3V_t, V_r^3V_t, V_r^2 V_t^2, V_r^3 V_t, V_r^2 V_t^2, V_r V_t^2, V_r^2 V_t, V_r V_t^2, V_r V_t^2, V_t^3]. \end{equation}
In the case of OIBTMs, assume again that the overall number of inputs of this multiplexer is $N$. In Fig.~\ref{fig:root-scheme} this number is $N=11$. Denote the number of inputs of the initial CBTM by $N_2$. In the figure, it is $N_2=8$. The number of inputs on the unfinished level under construction on the incomplete branch of the multiplexer below the output PR is denoted by $N_3$. In the figure $N_3=2$, that is, the number of inputs of PR$_{10}$. Then the total transmission coefficients $V_n^{\rm out}$ characterizing the arms of the OIBTM can be expressed as \begin{equation} \begin{array}{l@{\qquad}c@{\qquad}r@{\;}c@{\;}c@{\;}c@{\;}l} V_n^{\rm out}=V_r^{m_2-H(n-1)}V_t^{H(n-1)}, &\text{if} & 0&<&n&\le& N_2\\ V_n^{\rm out}=V_tV_r^{m_3+1-H(n-N_2-1)}V_t^{H(n-N_2-1)},&\text{if} & N_2&<&n&\le& N_2+N_3\\ V_n^{\rm out}=V_tV_r^{m_3-H(n-N_2-\frac{N_3}{2}-1)}V_t^{H(n-N_2-\frac{N_3}{2}-1)}, & \text{if} & N_2+N_3&<&n&\le& N. \end{array}\label{eq:root:seqnr} \end{equation} All the quantities $m_i$ and $N_i$ can be derived from the overall number of inputs $N$. The value $m_2$ corresponding to the number of levels belonging to the branch of the OIBTM containing the initial symmetric multiplexer can be derived as $m_2=\lceil\log_2(N)\rceil$, where $\lceil x\rceil$ denotes the ceiling function that returns with the least integer greater than or equal to $x$. In the figure $m_2=4$. Accordingly, $N_2$ can be expressed as $N_2=2^{m_2-1}$. The number of finished levels, that is, the number of levels in the complete subtree on the incomplete branch of the multiplexer is $m_3=\lfloor\log_2(N-N_2)\rfloor$, where $\lfloor x\rfloor$ denotes the floor function. In the figure, PR$_9$ itself forms a complete 1-level subtree, therefore $m_3=1$. On level $m_3$, the number of inputs are $N_4=2^{m_3}$ ($N_4=2$ in the figure), but it may occur that new routers have been already added to some inputs on this level. Finally, the number of inputs on the next level of the incomplete branch is $N_3=2(N-N_2-N_4)$.
As an example, we show the total transmission coefficients $V_n^{\rm out}$ of the OIBTM presented in Fig.~\ref{fig:root-scheme}: \begin{equation} V_n^{\rm out}=[V_r^4,V_r^3V_t,V_r^3V_t,V_r^2V_t^2,V_r^3V_t,V_r^2V_t^2,V_r^2V_t^2,V_r V_t^3, V_r^2V_t,V_r V_t^2,V_t^2]. \end{equation}
Note that the total transmission coefficients presented in Eqs.~\eqref{eq:sym:seqnr}, \eqref{eq:branch:seqnr} and \eqref{eq:root:seqnr} are indexed according to their positions in the multiplexer, that is, the subsequent values are generally not sorted into an ascending or descending order.
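As a quick cross-check of Eqs.~\eqref{eq:sym:seqnr}, \eqref{eq:branch:seqnr} and \eqref{eq:root:seqnr}, the following short Python sketch (ours, not part of the derivation) evaluates the total transmission coefficients of the three multiplexer types for given router efficiencies; for $N=11$ it reproduces the two example lists shown above. The function names are illustrative only.
\begin{verbatim}
from math import floor, ceil, log2

def H(x):                       # Hamming weight
    return bin(x).count("1")

def V_sym(N, Vr, Vt):           # complete binary tree, N a power of two
    m = int(log2(N))
    return [Vr**(m - H(n - 1)) * Vt**H(n - 1) for n in range(1, N + 1)]

def V_in(N, Vr, Vt):            # input-extended incomplete binary tree
    m1 = floor(log2(N)); N1 = 2 * (N - 2**m1)
    V = []
    for n in range(1, N + 1):
        k = n - 1 if n <= N1 else n - N1 // 2 - 1
        p = m1 + 1 if n <= N1 else m1
        V.append(Vr**(p - H(k)) * Vt**H(k))
    return V

def V_out(N, Vr, Vt):           # output-extended incomplete binary tree
    m2 = ceil(log2(N)); N2 = 2**(m2 - 1)      # N between two powers of two
    m3 = floor(log2(N - N2)); N4 = 2**m3; N3 = 2 * (N - N2 - N4)
    V = []
    for n in range(1, N + 1):
        if n <= N2:
            V.append(Vr**(m2 - H(n - 1)) * Vt**H(n - 1))
        elif n <= N2 + N3:
            k = n - N2 - 1
            V.append(Vt * Vr**(m3 + 1 - H(k)) * Vt**H(k))
        else:
            k = n - N2 - N3 // 2 - 1
            V.append(Vt * Vr**(m3 - H(k)) * Vt**H(k))
    return V
\end{verbatim}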
\section{Statistical theory}\label{sec:stat-theo}
In order to analyze the proposed systems in detail we start from the general statistical theory introduced in Ref.~\cite{Bodog2020} that can be applied for describing any periodic SPSs based on spatial multiplexing and equipped with PNRDs. In this framework, it is assumed that $l$ photon pairs are generated in the $n$th multiplexed unit by a nonlinear source and the detection of a predefined number of photons $j$ ($j\leq l$) during a heralding event triggers the opening of the input ports of the multiplexer. In general, $i$ signal photons can be expected at the output of the multiplexing system, the probability of which can be written as \begin{eqnarray}
P_i^{(S)}=\big(1-\sum_{j\in S}P^{(D)}(j)\big)^N\delta_{i,0}+\sum_{n=1}^N\left[\big(1-\sum_{j\in S}P^{(D)}(j)\big)^{n-1}\times\sum_{l=i}^\infty\sum_{j\in S} P^{(D)}(j|l)P^{(\lambda)}(l)V_n(i|l)\right].\label{general_formula} \end{eqnarray} In this formula $P_n^{(D)}(j)$ denotes the probability of detecting exactly $j$ photons in the $n$th multiplexed unit.
$P^{(D)}(j|l)$ is the conditional probability of registering $j$ photons, provided that $l$ photons arrive at the detector. The probability of generating $l$ photon pairs in the $n$th multiplexed unit when the mean photon number of the generated photon pairs is $\lambda_n$ in that unit is denoted by $P_n^{(\lambda_n)}(l)$.
$V_n(i|l)$ stands for the conditional probability of the event that the output of the multiplexer is reached by $i$ photons, provided that the number of signal photons arriving from the $n$th multiplexed unit into the system is $l$. The set $S$ describes the detection strategy, that is, it contains elements from the set of positive integers $\mathbb{Z}^+$ up to a predefined boundary value $J_b$.
The first term in Eq.~\eqref{general_formula} contributes only to the case where no photon reaches the output, that is, to the probability $P_0^{(S)}$. It describes the case when none of the PNRDs in the multiplexed units have detected a photon number in $S$. The second term in Eq.~\eqref{general_formula} describes the case that the heralding event occurs in the $n$th unit. The first factor in this term is the probability that none of the first $n-1$ detectors have detected a photon number in $S$. The second factor corresponds to the event that out of the $l$ photons entering the multiplexer from the $n$th multiplexed unit after heralding, only $i$ reach the output due to the losses of the multiplexer. The summation over $n$ in the second term takes into consideration all the possible contributions to the probability $P_i^{(S)}$.
In Eq.~\eqref{general_formula} the various probabilities can be expressed as follows. The probability $P^{(\lambda)}(l)$ represents that a nonlinear source generates $l$ photon pairs. In our calculations we use Poisson distribution, that is, \begin{equation} P^{(\lambda)}(l)=\frac{\lambda^l e^{-\lambda}}{l!}, \end{equation}
$\lambda$ representing the mean photon number of the photon pairs generated by the nonlinear source and arriving at the detectors in the multiplexed units, that is, the input of the heralding process. Hence, we refer to it by the term \emph{input mean photon number} in the following. Poissonian distribution is valid for multimode SPDC or SFWM processes, that is, for weaker spectral filtering \cite{Avenhaus2008, Mauerer2009, Takesue2010, Almeida2012, Collins2013, Harder2016, KiyoharaOE2016}. Assuming this distribution makes it possible to compare our results with the ones presented in a significant part of the literature related to SPS, which were also obtained for Poissonian distribution~\cite{Jeffrey2004, ShapiroWong2007, Ma2010, Mower2011, MazzarellaPRA2013, Adam2014, Bodog2016, MagnoniQIP2019}. The formula for the conditional probability $P^{(D)}(j|l)$ can be obtained as \begin{eqnarray}
P^{(D)}(j|l)=\binom{l}{j}V_D^j(1-V_D)^{l-j}, \end{eqnarray} where $l$ and $j$ are the number of the generated and detected photons inside a multiplexed unit, respectively, by using a detector with efficiency $V_D$. Finally, the total probability $P^{(D)}(j)$ reads \begin{eqnarray}
P^{(D)}(j)=\sum_{l=j}^\infty P^{(D)}(j|l)P^{(\lambda)}(l). \end{eqnarray} The conditional probability \begin{equation}
V_n(i|l)=\binom{l}{i}V_n^i(1-V_n)^{l-i}\label{eq:Vn:cond} \end{equation} describes the case when $i$ signal photons reach the output of the whole multiplexer given that $l$ signal photons are generated in the $n$th multiplexed unit. For analyzing a particular setup the corresponding total transmission coefficient $V_n$ should be substituted into Eq.~\eqref{eq:Vn:cond}. In the case of the schemes studied in this paper the formulas in Eqs.~\eqref{eq:sym:seqnr}, \eqref{eq:branch:seqnr}, and \eqref{eq:root:seqnr} can be used for the CBTM, the IIBTM, and the OIBTM schemes, respectively. We note that
for the priority logic that corresponds to Eq.~\eqref{general_formula} the preferred multiplexed unit is the one with the smallest sequential number $n$ when heralding events occur in multiple units. Therefore, it is appropriate to reorder the numbering of the multiplexed units in these equations so that the corresponding total transmission coefficients $V_n$ are arranged into a decreasing order. The numbering of the multiplexer arms having identical total transmission coefficients $V_n$ is arbitrary. Applying such an indexing the multiplexer arm with the highest $V_n$ corresponding to the smallest loss is preferred by the priority logic when multiple heralding events occur that can result in higher single-photon probabilities.
Using the presented framework, the optimization of the considered systems can be accomplished in the following way. We fix the total transmission coefficients $V_n$ of the systems, that is, $V_n^{\rm sym}$, $V_n^{\rm in}$, and $V_n^{\rm out}$, for the CBTM, the IIBTM, and the OIBTM, respectively. This means that the reflection and transmission efficiencies of the router $V_r$ and $V_t$, respectively, and the general transmission coefficient $V_b$ are fixed. We also fix the detection strategy $S$ and the detector efficiency $V_D$. Then two parameters are left to be optimized that are the input mean photon number $\lambda$ and the number of multiplexed units $N$. First, we determine the optimal value of the input mean photon number $\lambda$ for which the single-photon probability $P_1$ is the highest for a fixed value of the number of multiplexed units $N$. This probability is termed as the \emph{achievable single-photon probability} and denoted by $P_{1,N}$ while the corresponding photon number is called \emph{optimal input mean photon number} and denoted by $\lambda_\text{opt}$. We repeat this procedure for all reasonable values of the number of multiplexed units $N$ and select the optimal value $N_{\text{opt}}$ for which the achievable single-photon probability $P_{1,N}$ is the highest. This probability is termed as the \emph{maximal single-photon probability}, denoted by $P_{1,\max}$ and it belongs to the \emph{optimal number of multiplexed units} $N_\text{opt}$ and the optimal input mean photon number $\lambda_\text{opt}$ corresponding to $N_\text{opt}$. We remark that, although $\lambda_\text{opt}$ can be found for every value of $N$, we do not use a separate term for the one belonging to the optimal number of multiplexed units $N_\text{opt}$.
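To illustrate the optimization procedure numerically, the following Python sketch (ours, with the simplifying assumptions stated in the comments) evaluates the single-photon probability of Eq.~\eqref{general_formula} for the detection strategy $S=\{1\}$ and scans the input mean photon number $\lambda$ for a fixed number of multiplexed units; it reuses the function \texttt{V\_out} defined in the previous sketch. We assume here that the general transmission coefficient $V_b$ simply multiplies every total transmission coefficient $V_n$, and the infinite sums over the generated photon number are truncated at \texttt{l\_max}.
\begin{verbatim}
from math import comb, exp, factorial

def p_pairs(l, lam):              # Poissonian P^(lambda)(l)
    return lam**l * exp(-lam) / factorial(l)

def p_det(j, l, VD):              # binomial P^(D)(j|l)
    return comb(l, j) * VD**j * (1 - VD)**(l - j) if j <= l else 0.0

def P1(Vn, lam, VD, Vb, l_max=30):
    # output single-photon probability for S = {1}; the list Vn is sorted
    # into decreasing order, mimicking the priority logic described above
    Vn = sorted((Vb * v for v in Vn), reverse=True)
    p_herald = sum(p_det(1, l, VD) * p_pairs(l, lam) for l in range(1, l_max + 1))
    prob = 0.0
    for n, V in enumerate(Vn, start=1):
        inner = sum(p_det(1, l, VD) * p_pairs(l, lam) * l * V * (1 - V)**(l - 1)
                    for l in range(1, l_max + 1))
        prob += (1 - p_herald)**(n - 1) * inner
    return prob

# achievable single-photon probability P_{1,N} and lambda_opt for an OIBTM
# with N = 11 inputs and V_b = 0.98, V_D = 0.9, V_r = 0.92, V_t = 0.9
Vn = V_out(11, 0.92, 0.9)
P1N, lam_opt = max((P1(Vn, i / 1000, 0.9, 0.98), i / 1000) for i in range(1, 2001))
print(P1N, lam_opt)
\end{verbatim}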
We note that in the following we use the superscripts ``out'', ``in'', and ``sym'' for all these quantities to denote results achieved for SPSs based on OIBTM, IIBTM, and CBTM, respectively. \section{Results} In this section, we summarize our results regarding the optimization of SPSs based on the proposed incomplete binary-tree multiplexers composed of asymmetric routers. Asymmetric routers with high transmission efficiencies can be realized with bulk optical elements, as it is presented in Fig.~\ref{fig:Router}. Therefore, the ranges of the various transmission efficiencies are chosen so that their upper boundaries correspond to the best loss parameters of state-of-the-art bulk optical elements. Accordingly, the upper boundaries of the ranges of the reflection and transmission efficiencies of the router are taken to be $V_r=0.99$ and $V_t=0.985$, respectively \cite{Peters2006, Kaneda2019}. For the detector efficiency, we choose $V_D=0.98$ as the upper boundary because this value is the highest one reported in the literature \cite{Fukuda2011}. The general transmission coefficient $V_b$ is strongly affected by the actual experimental realization of the system; we assume the value of $V_b=0.98$ for its highest feasible value.
\begin{figure}\label{fig:3}
\end{figure}
Let us first clarify the role of the detection strategy. First, we determine the ranges of the loss parameters for which the maximal single-photon probability $P_{1,\max}$ achieved with SPSs based on the proposed multiplexing schemes and applying the $S=\{1,2\}$ detection strategy surpasses the same probability achieved by applying the SPD strategy. Fig.~\ref{fig:3}(a) shows the difference $\Delta_P^{\text{out}, S} = P_{1,\max}^{\text{out},S=\{1,2\}}- P_{1,\max}^{\text{out},\text{SPD}}$ between the maximal single-photon probabilities for SPSs based on OIBTM obtained by assuming $S=\{1,2\}$ detection strategy and SPD strategy, respectively, as a function of the transmission efficiency $V_t$ and the reflection efficiency $V_r$ for the detector efficiency $V_D=0.8$ and the general transmission coefficient $V_b=0.9$. Below the continuous line, that is, for smaller values of the reflection and transmission efficiencies $V_r$ and $V_t$, respectively, the detection strategy $S=\{1,2\}$ outperforms the SPD strategy.
Fig.~\ref{fig:3}(b) shows the same quantity $\Delta_P^{\text{out}, S}$ as a function of the detector efficiency $V_D$ and the reflection efficiency $V_r$ for the transmission efficiency $V_t=0.8$ and the general transmission coefficient $V_b=0.9$. Below the line, the detection strategy $S=\{1,2\}$ outperforms the SPD strategy. The figure shows that this difference does not have a strong dependence on the detector efficiency $V_D$. We note that increasing the general transmission coefficient $V_b$ would cause the level $\Delta_P^{\text{out}, S}=0$ to shift toward smaller values of the reflection and transmission efficiencies $V_r$ and $V_t$, respectively. We calculated similar differences $\Delta_P^{\text{in},S}$ for SPSs based on IIBTM and $\Delta_P^{\text{sym},S}$ for SPSs based on CBTM. In these cases, we also found that the $S=\{1,2\}$ strategy outperforms the SPD strategy only for smaller values of the reflection and transmission efficiencies $V_r$ and $V_t$, respectively. Therefore, in our analysis we use the lower boundary 0.9 of the ranges for the efficiencies $V_r$, $V_t$, and $V_D$ ensuring that SPD is the optimal detection strategy for the whole considered parameter ranges. As all the subsequent results are obtained for the SPD strategy, henceforth we do not indicate this fact.
\begin{figure}
\caption{The achievable single-photon probabilities $P_{1,N}$ as a function of the number of multiplexed units $N$ for SPSs based on (a) OIBTM for the parameters $V_b=0.98$, $V_D=0.9$, $V_r=0.92$, and $V_t=0.9$, and on (b) IIBTM for the parameters $V_b=0.98$, $V_D=0.95$, $V_r=0.92$, and $V_t=0.9$. For comparison, the same quantity is presented for CBTM, denoted by red squares.}
\label{fig:root_Ncompare}
\end{figure} Next, we discuss the properties of the achievable single-photon probabilities for the proposed schemes. Figure~\ref{fig:root_Ncompare} shows typical results for the achievable single-photon probabilities $P_{1,N}$ as a function of the number of multiplexed units $N$. In Fig.~\ref{fig:root_Ncompare}(a) we present results for SPSs based on OIBTM for the general transmission coefficient $V_b=0.98$, the detector efficiency $V_D=0.9$, the reflection efficiency $V_r=0.92$, and the transmission efficiency $V_t=0.9$, while Figure~\ref{fig:root_Ncompare}(b) is an example for SPSs based on IIBTM for the parameters $V_b=0.98$, $V_D=0.95$, $V_r=0.92$, and $V_t=0.9$. Note that CBTMs are special cases of incomplete binary-tree multiplexers for certain values of the number of multiplexed units, therefore the point sequences presented in Fig.~\ref{fig:root_Ncompare} contain results for CBTM also. These points are marked with red squares. In previous studies \cite{Adam2014, Bodog2020} it was found that for CBTM, the achievable single-photon probability $P_{1,N}$ as a function of the number of multiplexed units $N$ has a single maximum. This is due to the fact that increasing the system size, that is, the number of levels in the multiplexer the losses in the system are increased, that is, all the total transmission coefficients $V_n$ assigned to the various branches of the multiplexer are decreased that deteriorates the benefit of multiplexing. However, the achievable single-photon probabilities presented in Fig.~\ref{fig:root_Ncompare} for SPSs based on OIBTM and IIBTM show local maxima for values of the number of multiplexed units $N$ that are between the special power-of-two numbers of multiplexed units characterizing the CBTM. The absolute maxima of the achievable single-photon probabilities presented in Fig.~\ref{fig:root_Ncompare} for SPSs based on incomplete binary-tree multiplexers are higher than the maximum of the same quantity for SPSs based on CBTM. These maxima can occur either for smaller or larger values of the number of multiplexed units $N$ than for CBTM. As no simple rule can be found for this behavior with respect to the parameters $V_b$, $V_D$, $V_r$, and $V_t$, we did not analyze this problem in detail. We note that the breaking points in the point sequences representing OIBTM in Fig.~\ref{fig:root_Ncompare}(a) correspond to the case when a new router is added to a complete symmetric subtree on the incomplete branch of the OIBTM. However, in the case of IIBTM in Fig.~\ref{fig:root_Ncompare}(b) the breaking points can be observed only when a new router is added to a CBTM.
Next, we compare the performance of the two proposed incomplete binary-tree multiplexer schemes and that of the CBTM scheme. The results of these calculations are presented in Fig.~\ref{fig:P1diffs}. \begin{figure}\label{fig:P1diffs}
\end{figure} Figure~\ref{fig:P1diffs}(a) shows the difference $\Delta_P^{\text{out}-\text{sym}}=P_{1,\max}^{\text{out}}-P_{1,\max}^{\text{sym}}$ between the maximal single-photon probabilities for SPSs based on OIBTM and on CBTM, respectively, as a function of the transmission efficiency $V_t$ and the reflection efficiency $V_r$ for the general transmission coefficient $V_b=0.98$ and the detector efficiency $V_D=0.9$. The corresponding function $\Delta_P^{\text{in}-\text{sym}}=P_{1,\max}^{\text{in}}-P_{1,\max}^{\text{sym}}$ for SPSs based on IIBTM and on CBTM, respectively, can be seen in Fig.~\ref{fig:P1diffs}(b). The figures show that SPSs based on either OIBTM or IIBTM outperform SPSs based on CBTM on the considered range of the parameters. It is in accordance with the expectations, as CBTMs are special cases of OIBTMs and IIBTMs. The advantage introduced by the OIBTM or the IIBTM is smaller for higher values of the reflection efficiencies $V_r$ or the transmission efficiencies $V_t$. From the figures one can also deduce that for a fixed value of the transmission efficiency $V_t$ or the reflection efficiency $V_r$ the highest differences $\Delta_P^{\text{out}-\text{sym}}$ or $\Delta_P^{\text{in}-\text{sym}}$ can be obtained for SPSs based on multiplexers equipped with symmetric routers, that is, when $V_r=V_t$. The maximal value of the difference $\Delta_{P,\max}^{\text{out}-\text{sym}}$ is $0.026$ that can be achieved at $V_r=V_t=0.9$ while the maximal value of the difference $\Delta_P^{\text{in}-\text{sym}}$ is $0.019$ that occurs for the values $V_r=V_t=0.949$. Note that the details of the functions of Figs.~\ref{fig:P1diffs} including the particular data of these maxima are affected by the actual values of the detector efficiency $V_D$ and the general transmission coefficient $V_b$. However, the main characteristics of these functions remain the same. We also remark that, due to the asymmetry of both OIBTMs and IIBTMs, the images presented in Fig.~\ref{fig:P1diffs} are asymmetric.
\begin{figure}\label{fig:out-in}
\end{figure}
Finally, we compare the performance of the two novel proposed incomplete binary-tree multiplexer schemes. In Fig.~\ref{fig:out-in}(a) we show the difference $\Delta_P^{\text{out}-\text{in}}=P_{1,\max}^{\text{out}}-P_{1,\max}^{\text{in}}$ between the maximal single-photon probabilities for SPSs based on OIBTM and on IIBTM, respectively, as a function of the transmission efficiency $V_t$ and the reflection efficiency $V_r$ for the general transmission coefficient $V_b=0.98$ and the detector efficiency $V_D=0.9$. According to the figure, OIBTM outperforms IIBTM in the whole considered ranges of the transmission and reflection efficiencies $V_t$ and $V_r$, respectively. Intuitively, this observation can be explained as follows. In the case of IIBTM, the addition of new routers to the multiplexer always leads to new branches with higher losses, that is, smaller total transmission efficiencies $V_n$ than the ones in the initial CBTM. On the contrary, for OIBTM, the total transmission efficiencies $V_n$ of the novel branches of OIBTM are always higher than the ones characterizing the initial CBTM. As we described in Sec.~\ref{sec:stat-theo}, the priority logic prefers multiplexer arms with higher $V_n$ in the case of multiple heralding events, therefore this property of OIBTM can result in higher single-photon probabilities. For the values of the general transmission coefficient $V_b$ and the detector efficiency $V_D$ used in these calculations, the highest differences between the maximal single-photon probabilities $P_{1,\max}$ of the two schemes are $\Delta_P^{\text{out}-\text{in}}\approx 0.016$ at $V_t\approx 0.9$ and $V_r\approx0.92$.
The results thus far showed that by using incomplete binary-tree multiplexers, it is possible to increase the maximal single-photon probability. However, from an experimental point of view, the number of optical elements required to realize these multiplexers is also important. In Fig.~\ref{fig:out-in}(b) we show the difference $\Delta_N^{\rm out-in}=N_{\text{opt}}^{\rm out}-N_{\text{opt}}^{\rm in}$ between the optimal number of multiplexed units for SPSs based on OIBTM and on IIBTM, respectively, as a function of the transmission efficiency $V_t$ and the reflection efficiency $V_r$ for the general transmission coefficient $V_b=0.98$ and the detector efficiency $V_D=0.9$. The figure shows that the difference $\Delta_N^{\rm out-in}$, that is, the experimentally optimal choice of the multiplexing scheme, is strongly affected by the efficiencies $V_t$ and $V_r$ of the routers. The difference fluctuates between positive and negative values, that is, for some parameters the number of multiplexed units is smaller for SPSs based on OIBTM, for other parameters this quantity is smaller for SPSs based on IIBTM. For small values of the reflection and transmission efficiencies $V_r$ and $V_t$, respectively, the absolute difference $\left|\Delta_N^{\rm out-in}\right|$ is rather small while for high values of these efficiencies $\left|\Delta_N^{\rm out-in}\right|$ increases. The difference between the optimal number of multiplexed units can be as high as $\Delta_N^{\rm out-in}\approx15$. Note, however, that in these cases the optimal number of multiplexed units $N_{\text{opt}}$ is also very high ($N_{\text{opt}}\approx 40$). In view of these observations, when an experiment is realized with finite experimental resources with given loss parameters, it is worth using the full statistical treatment presented in this paper to determine which of the proposed incomplete multiplexers yields higher maximal single-photon probability with less optical elements.
\begin{table*}[!tb] \caption{Maximal single-photon probabilities $P^{\text{out}}_{1,\max}$ for SPS based on OIBTM, the required optimal number of multiplexed units $N_\text{opt}^{\text{out}}$, and the optimal input mean photon numbers $\lambda_\text{opt}^{\text{out}}$ at which they can be achieved for various values of the reflection efficiency $V_r$, the transmission efficiency $V_t$ and the detector efficiency $V_D$, and for the general transmission coefficient $V_b=0.98$. \label{tab:2}} \centerline{
\begin{tabular}{c|c|ccc|ccc|ccc}\hline
&&\multicolumn{3}{c|}{$V_t=0.9$}
& \multicolumn{3}{c|}{$V_t=0.95$}
& \multicolumn{3}{c}{$V_t=0.985$}\\\hline $V_r$& $V_D$ & $P_{1,\max}^{\text{out}}$&$N_\text{opt}^{\text{out}}$&$\lambda_{\text{opt}}^{\text{out}}$& $P_{1,\max}^{\text{out}}$&$N_\text{opt}^{\text{out}}$&$\lambda_{\text{opt}}^{\text{out}}$& $P_{1,\max}^{\text{out}}$&$N_\text{opt}^{\text{out}}$&$\lambda_{\text{opt}}^{\text{out}}$ \\\hline 0.92 & 0.8 & 0.685 & 10 & 0.686 & 0.743 & 20 & 0.446 & 0.809 & 40 & 0.315 \\ 0.92 & 0.9 & 0.716 & 10 & 0.78 & 0.772 & 11 & 0.696 & 0.835 & 20 & 0.517 \\ 0.92 & 0.95 & 0.733 & 10 & 0.869 & 0.793 & 10 & 0.836 & 0.855 & 20 & 0.658 \\ 0.92 & 0.98 & 0.744 & 10 & 0.943 & 0.808 & 10 & 0.925 & 0.87 & 20 & 0.824 \\\hline 0.97 & 0.8 & 0.757 & 17 & 0.472 & 0.801 & 36 & 0.262 & 0.862 & 40 & 0.205 \\ 0.97 & 0.9 & 0.787 & 17 & 0.576 & 0.828 & 18 & 0.466 & 0.88 & 38 & 0.279 \\ 0.97 & 0.95 & 0.805 & 17 & 0.711 & 0.845 & 18 & 0.586 & 0.896 & 20 & 0.464 \\ 0.97 & 0.98 & 0.818 & 9 & 0.927 & 0.858 & 10 & 0.87 & 0.908 & 19 & 0.682 \\\hline 0.99 & 0.8 & 0.807 & 34 & 0.324 & 0.852 & 40 & 0.214 & 0.899 & 74 & 0.114 \\ 0.99 & 0.9 & 0.834 & 18 & 0.513 & 0.872 & 33 & 0.314 & 0.911 & 37 & 0.213 \\ 0.99 & 0.95 & 0.854 & 17 & 0.66 & 0.888 & 17 & 0.534 & 0.921 & 36 & 0.269 \\ 0.99 & 0.98 & 0.869 & 17 & 0.82 & 0.901 & 17 & 0.692 & 0.931 & 18 & 0.561\\\hline \end{tabular}} \end{table*}
Let us now assess the performance of SPSs based on OIBTM in more detail. Table \ref{tab:2} shows the maximal single-photon probabilities $P^{\text{out}}_{1,\max}$ for this setup, the required optimal number of multiplexed units $N_\text{opt}^{\text{out}}$, and the optimal input mean photon numbers $\lambda_\text{opt}^{\text{out}}$ at which they can be achieved for various values of the reflection efficiency $V_r$, the transmission efficiency $V_t$ and the detector efficiency $V_D$, and for the general transmission coefficient $V_b=0.98$.
From the table one can deduce that increasing any of the three parameters $V_r$, $V_t$, or $V_D$ leads to an increase in the single-photon probability $P^{\text{out}}_{1,\max}$. The highest single-photon probability that can in principle be achieved by output-extended systems is higher than 0.93 for the parameters $V_b=0.98$, $V_D=0.98$, $V_r=0.99$ and $V_t=0.985$. These parameters are considered to be realizable by state-of-the-art technology. At a given value of $V_D$ the increase of the reflection and transmission efficiencies $V_r$ and $V_t$, respectively, is generally accompanied by an increase in the required number of multiplexed units $N_\text{opt}^{\text{out}}$. Obviously, smaller losses corresponding to higher transmissions allow us to use larger optimal system sizes to achieve higher maximal single-photon probabilities $P^{\text{out}}_{1,\max}$ via multiplexing. However, an increase in the detector efficiency $V_D$ corresponding to an increased probability that the single-photon events are selected correctly by the detectors generally leads to a decrease in $N_\text{opt}^{\text{out}}$. This is due to the fact that in this case the multi-photon events can be excluded by the detectors themselves; there is no need for suppressing the occurrence of these events by decreasing the intensity and subsequently increasing the system size, that is, introducing longer arms with higher losses to the multiplexer is not so crucial anymore.
The observations concerning the optimal input mean photon number $\lambda_{\text{opt}}^{\rm out}$ are the opposite. Increasing either the reflection efficiency $V_r$ or the transmission efficiency $V_t$, the optimal input mean photon number $\lambda_{\text{opt}}^{\rm out}$ decreases, while increasing the values of the detector efficiency $V_D$ leads to an increase in the values of $\lambda_{\text{opt}}^{\rm out}$. The finding that the relationship between the optimal number of multiplexed units $N_\text{opt}^{\text{out}}$ and the optimal input mean photon number $\lambda_{\text{opt}}^{\rm out}$ is inverse is not unexpected: having fewer multiplexed units in the multiplexing system can be compensated by higher values of the input mean photon number, which guarantees the occurrence of at least one heralding event in the whole multiplexer.
\section{Conclusion} We have proposed two types of incomplete binary-tree multiplexers aiming at increasing the performance of spatially multiplexed single-photon sources. These multiplexers contain incomplete branches either at the input or at the output of the symmetric ones, hence the power-of-two restriction on the number of multiplexed units characterizing symmetric multiplexers is eliminated for them. We applied a general statistical theory that includes all relevant loss mechanisms for analyzing and optimizing these single-photon sources based on these multiplexers realized with general asymmetric routers and photon-number-resolving detectors. We have shown that the use of any of the two proposed multiplexing systems can lead to higher single-photon probabilities than that achieved with complete binary-tree multiplexers applied thus far in experiments. We have found that the performance of single-photon sources based on output-extended incomplete binary-tree multiplexers is better than that of those based on input-extended ones in the considered ranges of the parameters. The single-photon probabilities that can in principle be achieved by output-extended systems are higher than 0.93 when they are realized by state-of-the-art bulk optical elements.
\begin{backmatter} \bmsection{Funding} National Research, Development and Innovation Office, Hungary (Project No.\ K124351 and the Quantum Information National Laboratory of Hungary); European Union (Grants No.\ EFOP-3.6.1.-16-2016-00004, No.\ EFOP-3.6.2-16-2017-00005 and No.\ EFOP-3.4.3-16-2016-00005). \end{backmatter}
\end{document}
|
arXiv
|
The Measurement Equation & Calibration (https://casa.nrao.edu/casadocs/casa-5.1.0/reference-material/the-measurement-equation-calibration)
The visibilities measured by an interferometer must be calibrated before formation of an image. This is because the wavefronts received and processed by the observational hardware have been corrupted by a variety of effects. These include (but are not exclusive to): the effects of transmission through the atmosphere, the imperfect amplification of the electronic (digital) signal and its transmission through the signal processing system, and the effects of formation of the cross-power spectra by a correlator. Calibration is the process of reversing these effects to arrive at corrected visibilities which resemble as closely as possible the visibilities that would have been measured in vacuum by a perfect system. The subject of this chapter is the determination of these effects by using the visibility data itself.
The HBS Measurement Equation
The relationship between the observed and ideal (desired) visibilities on the baseline between antennas i and j may be expressed by the Hamaker-Bregman-Sault Measurement Equation of Hamaker, Bregman, & Sault (1996) [1] and Sault, Hamaker, & Bregman (1996) [2].
[1] Hamaker, J.P., Bregman, J.D. & Sault, R.J. 1996, A&AS, 117, 137 (ADS).
[2] Sault, R.J., Hamaker, J.P. & Bregman, J.D. 1996, A&AS, 117, 149 (ADS).
\begin{eqnarray}
\vec{V}_{ij}~=~J_{ij}~\vec{V}_{ij}^{\mathrm{~IDEAL}}
\end{eqnarray}
where $\vec{V}_{ij}$ represents the observed visibility, a complex number representing the amplitude and phase of the correlated data from a pair of antennas in each sample time, per spectral channel. $\vec{V}_{ij}^{\mathrm{~IDEAL}}$ represents the corresponding ideal visibilities, and $J_{ij}$ represents the accumulation of all corruptions affecting baseline $ij$. The visibilities are indicated as vectors spanning the four correlation combinations which can be formed from dual-polarization signals. These four correlations are related directly to the Stokes parameters which fully describe the radiation. The $J_{ij}$ term is therefore a 4$\times$4 matrix.
Most of the effects contained in $J_{ij}$ (indeed, the most important of them) are antenna-based, i.e., they arise from measurable physical properties of (or above) individual antenna elements in a synthesis array. Thus, adequate calibration of an array of $N_{ant}$ antennas forming $N_{ant} (N_{ant}-1)/2$ baseline visibilities is usually achieved through the determination of only $N_{ant}$ factors, such that $J_{ij} = J_i \otimes J_j^{*}$. For the rest of this chapter, we will usually assume that $J_{ij}$ is factorable in this way, unless otherwise noted.
As implied above, $J_{ij}$ may also be factored into the sequence of specific corrupting effects, each having their own particular (relative) importance and physical origin, which determines their unique algebra. Including the most commonly considered effects, the Measurement Equation can be written:
\begin{eqnarray}
\vec{V}_{ij}~=~M_{ij}~B_{ij}~G_{ij}~D_{ij}~E_{ij}~P_{ij}~T_{ij}~\vec{V}_{ij}^{\mathrm{~IDEAL}}
\end{eqnarray}
$T_{ij}~=~$ Polarization-independent multiplicative effects introduced by the troposphere, such as opacity and path-length variation.
$P_{ij}~=~$ Parallactic angle, which describes the orientation of the polarization coordinates on the plane of the sky. This term varies according to the type of the antenna mount.
$E_{ij}~=~$ Effects introduced by properties of the optical components of the telescopes, such as the collecting area's dependence on elevation.
$D_{ij}~=~$ Instrumental polarization response. "D-terms" describe the polarization leakage between feeds (e.g. how much the R-polarized feed picked up L-polarized emission, and vice versa).
$G_{ij}~=~$ Electronic gain response due to components in the signal path between the feed and the correlator. This complex gain term $G_{ij}$ includes the scale factor for absolute flux density calibration, and may include phase and amplitude corrections due to changes in the atmosphere (in lieu of $T_{ij}$). These gains are polarization-dependent.
$B_{ij}~=~$ Bandpass (frequency-dependent) response, such as that introduced by spectral filters in the electronic transmission system.
$M_{ij}~=~$ Baseline-based correlator (non-closing) errors. By definition, these are not factorable into antenna-based parts.
Note that the terms are listed in the order in which they affect the incoming wavefront ($G$ and $B$ represent an arbitrary sequence of such terms depending upon the details of the particular electronic system). Note that $M$ differs from all of the rest in that it is not antenna-based, and thus not factorable into terms for each antenna.
As written above, the measurement equation is very general; not all observations will require treatment of all effects, depending upon the desired dynamic range. E.g., instrumental polarization calibration can usually be omitted when observing (only) total intensity using circular feeds. Ultimately, however, each of these effects occurs at some level, and a complete treatment will yield the most accurate calibration. Modern high-sensitivity instruments such as ALMA and JVLA will likely require a more general calibration treatment than for similar observations with older arrays in order to reach the advertised dynamic ranges on strong sources.
In practice, it is usually far too difficult to adequately measure most calibration effects absolutely (as if in the laboratory) for use in calibration. The effects are usually far too changeable. Instead, the calibration is achieved by making observations of calibrator sources on the appropriate timescales for the relevant effects, and solving the measurement equation for them using the fact that we have $N_{ant}(N_{ant}-1)/2$ measurements and only $N_{ant}$ factors to determine (except for $M$ which is only sparingly used). Note: By partitioning the calibration factors into a series of consecutive effects, it might appear that the number of free parameters is some multiple of $N_{ant}$, but the relative algebra and timescales of the different effects, as well as the multiplicity of observed polarizations and channels compensate, and it can be shown that the problem remains well-determined until, perhaps, the effects are direction-dependent within the field of view. Limited solvers for such effects are under study; the calibrater tool currently only handles effects which may be assumed constant within the field of view. Corrections for the primary beam are handled in the imager tool. Once determined, these terms are used to correct the visibilities measured for the scientific target. This procedure is known as cross-calibration (when only phase is considered, it is called phase-referencing).
The best calibrators are point sources at the phase center (constant visibility amplitude, zero phase), with sufficient flux density to determine the calibration factors with adequate SNR on the relevant timescale. The primary gain calibrator must be sufficiently close to the target on the sky so that its observations sample the same atmospheric effects. A bandpass calibrator usually must be sufficiently strong (or observed with sufficient duration) to provide adequate per-channel sensitivity for a useful calibration. In practice, several calibrators are usually observed, each with properties suitable for one or more of the required calibrations.
Synthesis calibration is inherently a bootstrapping process. First, the dominant calibration term is determined, and then, using this result, more subtle effects are solved for, until the full set of required calibration terms is available for application to the target field. The solutions for each successive term are relative to the previous terms. Occasionally, when the several calibration terms are not sufficiently orthogonal, it is useful to re-solve for earlier types using the results for later types, in effect, reducing the effect of the later terms on the solution for earlier ones, and thus better isolating them. This idea is a generalization of the traditional concept of self-calibration, where initial imaging of the target source supplies the visibility model for a re-solve of the gain calibration ($G$ or $T$). Iteration tends toward convergence to a statistically optimal image. In general, the quality of each calibration and of the source model are mutually dependent. In principle, as long as the solution for any calibration component (or the source model itself) is likely to improve substantially through the use of new information (provided by other improved solutions), it is worthwhile to continue this process.
In practice, these concepts motivate certain patterns of calibration for different types of observation, and the calibrater tool in CASA is designed to accommodate these patterns in a general and flexible manner. For a spectral line total intensity observation, the pattern is usually:
Solve for $G$ on the bandpass calibrator
Solve for $B$ on the bandpass calibrator, using $G$
Solve for $G$ on the primary gain (near-target) and flux density calibrators, using $B$ solutions just obtained
Scale $G$ solutions for the primary gain calibrator according to the flux density calibrator solutions
Apply $G$ and $B$ solutions to the target data
Image the calibrated target data
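To make the pattern above concrete, the sketch below shows roughly how it might map onto CASA calibration tasks (CASA 6 style). The measurement set name, field names, reference antenna, and calibration table names are placeholders, only a small subset of each task's parameters is shown, and appropriate solution intervals and modes depend on the data; consult the task documentation for the full interfaces.

# Placeholder names throughout; intended to be run inside a CASA environment.
from casatasks import setjy, gaincal, bandpass, fluxscale, applycal

vis = 'obs.ms'

setjy(vis=vis, field='FLUXCAL')                      # set the flux density calibrator model

gaincal(vis=vis, caltable='bpphase.gcal',            # step 1: G on the bandpass calibrator
        field='BPCAL', solint='int', refant='ea01', calmode='p')

bandpass(vis=vis, caltable='bandpass.bcal',          # step 2: B on the bandpass calibrator, using G
         field='BPCAL', solint='inf', refant='ea01', gaintable=['bpphase.gcal'])

gaincal(vis=vis, caltable='intphase.gcal',           # step 3: G on gain and flux calibrators, using B
        field='GCAL,FLUXCAL', solint='int', refant='ea01', gaintable=['bandpass.bcal'])

fluxscale(vis=vis, caltable='intphase.gcal',         # step 4: scale G by the flux density calibrator
          fluxtable='flux.gcal', reference='FLUXCAL', transfer='GCAL')

applycal(vis=vis, field='TARGET',                    # step 5: apply B and the scaled G to the target
         gaintable=['bandpass.bcal', 'flux.gcal'])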
If opacity and gain curve information are relevant and available, these types are incorporated in each of the steps (in future, an actual solve for opacity from appropriate data may be folded into this process):
Solve for $G$ on the bandpass calibrator, using $T$ (opacity) and $E$ (gain curve) solutions already derived.
Solve for $B$ on the bandpass calibrator, using $G$, $T$ (opacity), and $E$ (gain curve) solutions.
Solve for $G$ on primary gain (near-target) and flux density calibrators, using $B$, $T$ (opacity), and $E$ (gain curve) solutions.
Apply $T$ (opacity), $E$ (gain curve), $G$, and $B$ solutions to the target data
For continuum polarimetry, the typical pattern is:
Solve for $G$ on the polarization calibrator, using (analytical) $P$ solutions.
Solve for $D$ on the polarization calibrator, using $P$ and $G$ solutions.
Solve for $G$ on primary gain and flux density calibrators, using $P$ and $D$ solutions.
Scale $G$ solutions for the primary gain calibrator according to the flux density calibrator solutions.
Apply $P$, $D$, and $G$ solutions to target data.
Image the calibrated target data.
For a spectro-polarimetry observation, these two examples would be folded together.
In all cases the calibrator model must be adequate at each solve step. At high dynamic range and/or high resolution, many calibrators which are nominally assumed to be point sources become slightly resolved. If this has biased the calibration solutions, the offending calibrator may be imaged at any point in the process and the resulting model used to improve the calibration. Finally, if sufficiently strong, the target may be self-calibrated as well.
General Calibrater Mechanics
The calibrater tasks/tool are designed to solve and apply solutions for all of the solution types listed above (and more are in the works). This leads to a single basic sequence of execution for all solves, regardless of type:
Set the calibrator model visibilities
Select the visibility data which will be used to solve for a calibration type
Arrange to apply any already-known calibration types (the first time through, none may yet be available)
Arrange to solve for a specific calibration type, including specification of the solution timescale and other specifics
Execute the solve process
Repeat 1-4 for all required types, using each result, as it becomes available, in step 3, and perhaps repeating for some types to improve the solutions
By itself, this sequence doesn't guarantee success; the data provided for the solve must have sufficient SNR on the appropriate timescale, and must provide sufficient leverage for the solution (e.g., D solutions require data taken over a sufficient range of parallactic angle in order to separate the source polarization contribution from the instrumental polarization).
Is error correction necessary?
Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out. To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?
– heather
That doesn't scale well. After a moderately long calculation you're basically left with the maximally mixed state, or whatever fixed point your noise has. To scale to arbitrarily long calculations you need to correct errors before they become too big.
Here's a short calculation to back up the intuition given above. Consider the simple white noise model (depolarizing noise), $$\rho'(\varepsilon)= (1-\varepsilon)\rho + \varepsilon \frac{\mathbb{I}}{\operatorname{tr} \mathbb{I}},$$ where $\rho$ is the ideal state (standard notation applies). If you concatenate $n$ such noisy processes, the new noise parameter is $\varepsilon'=1-(1-\varepsilon)^n$, which approaches 1 exponentially fast in the number of gates (or other error sources). If you repeat the experiment $m$ times and assume that the standard error scales as $\frac{1}{\sqrt{m}}$, you see that the number of runs $m$ would have to grow exponentially with the length of your calculation!
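A small numerical illustration of that scaling (an added sketch; the per-gate error rate is an arbitrary choice):

import numpy as np

eps = 0.01                                   # assumed per-gate depolarizing error probability
n_gates = np.array([10, 100, 1_000, 5_000])

eps_total = 1 - (1 - eps) ** n_gates         # accumulated noise parameter after n gates
p_clean = (1 - eps) ** n_gates               # probability that no gate erred at all

for n, et, pc in zip(n_gates, eps_total, p_clean):
    print(f"n = {n:5d}   eps' = {et:.4f}   P(no error) = {pc:.2e}")

# P(no error) decays exponentially with circuit length, so the number of repetitions
# needed to average the noise away grows exponentially as well.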
– M. Stern
If the error rate were low enough, you could run a computation a hundred times and take the most common answer. For instance, this would work if the error rate were low enough that the expected number of errors per computation was something very small — which means that how well this strategy works would depend on how long and complicated a computation you would like to do.
Once the error rate or the length of your computation become sufficiently high, you can no longer have any confidence that the most likely outcome is that there were zero errors: at a certain point it becomes more likely that you have one, or two, or more errors, than that you have zero. In this case, there is nothing to prevent the majority of the cases from giving you an incorrect answer. What then?
These issues are not special to quantum computation: they also apply to classical computation — it just happens that almost all of our technology is at a sufficiently advanced state of maturity that these issues do not concern us in practise; that there may be a greater chance of your computer being struck by a meteorite mid-computation (or it running out of battery power, or you deciding to switch it off) than of there being a hardware error. What is (temporarily) special about quantum computation is that the technology is not yet mature enough for us to be so relaxed about the possibility of error.
In those times when classical computation has been at a stage when error correction was both practical and necessary, we were able to make use of certain mathematical techniques — error correction — which made it possible to suppress the effective error rate, and in principle to make it as low as we liked. The same techniques surprisingly can be used for quantum error correction — with a little bit of extension, to accommodate the difference between quantum and classical information. At first, before the mid-1990s, it was thought that quantum error correction was impossible because of the continuity of the space of quantum states. But as it turns out, by applying classical error correction techniques in the right way to the different ways a qubit could be measured (usually described as "bit" and "phase"), you can in principle suppress many kinds of noise on quantum systems as well. These techniques are not special to qubits, either: the same idea can be used for quantum systems of any finite dimension (though for models such as adiabatic computation, it may then get in the way of actually performing the computation you wish to perform).
At the time I'm writing this, individual qubits are so difficult to build and to marshall that people are hoping to get away with doing proof-of-principle computations without any error correction at all. That's fine, but it will limit how long their computations can be until the number of accumulated errors is large enough that the computation stops being meaningful. There are two solutions: to get better at suppressing noise, or to apply error correction. Both are good ideas, but it is possible that error correction is easier to perform in the medium- and long-term than suppressing sources of noise.
– Niel de Beaudrap
As a quick correction, modern hardware does suffer from non-negligible error rates, and error-correction methods are used. That said, of course your point about the problems being much worse on current quantum computers holds. – Nat Mar 14 '18 at 21:12
@Nat: interesting. I'm vaguely aware that this may currently be the case for GPUs, and (in a context not involving active computation) RAID arrays are an obvious example as well. But could you describe other hardware platforms for which classical computation must rely on error correction during a computation? – Niel de Beaudrap Mar 14 '18 at 23:52
Seems like errors are most frequently in networking contexts, followed by disk storage, followed by RAM. Networking protocols and disks routinely implement error-correction tricks. RAM's a mixed bag; server/workstation RAM tends to use error-correcting code (ECC), though consumer RAM often doesn't. Within CPU's, I'd imagine that they have more implementation-specific tactics, but those'd likely be manufacturer secrets. Error-rates in CPU's and GPU's become relevant at an observable level in a few cases, e.g. in overclocking and manufacturer core-locking decisions. – Nat Mar 15 '18 at 0:02
Actually kinda curious about CPU-type error-correction now.. I mean, the cache would seem prone to the same issues that normal RAM is (unless somehow buffered with more power or something?), which'd presumably be unacceptable in server/workstation contexts. But at the register-level? That'd be something neat to read about; didn't see anything immediately on Google, though I suppose that such info'd likely be a trade secret. – Nat Mar 15 '18 at 0:08
Now, adding to M. Stern's answer:
The primary reason as to why error correction is needed for quantum computers, is because qubits have a continuum of states (I'm considering qubit-based quantum computers only, at the moment, for sake of simplicity).
In quantum computers, unlike classical computers each bit doesn't exist in only two possible states. For instance a likely source of error is over-rotation: $\alpha|0\rangle+\beta|1\rangle$ might be supposed to become $\alpha|0\rangle + \beta e^{i\phi}|1\rangle$ but actually becomes $\alpha|0\rangle+\beta e^{i(\phi+\delta)}|1\rangle$. The actual state is close to the correct state but still wrong. If we don't do something about this the small errors will build up over the course of time and eventually become a big error.
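A two-line numerical illustration of how such small over-rotations accumulate (an added sketch; the numbers are arbitrary):

import numpy as np

phi, delta, n = np.pi / 4, 0.002, 500   # intended phase per gate, small per-gate error, number of gates
print(f"intended total phase: {n * phi:.3f} rad")
print(f"actual total phase:   {n * (phi + delta):.3f} rad   (accumulated error: {n * delta:.3f} rad)")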
Moreover, quantum states are very delicate, and any interaction with the environment can cause decoherence and collapse of a state like $\alpha|0\rangle+\beta|1\rangle$ to $|0\rangle$ with probability $|\alpha|^2$ or $|1\rangle$ with probability $|\beta|^2$.
In a classical computer if say a bit's value is being replicated n-times as follows:
$$0 \to 00000...\text{n times}$$ and $$1 \to 11111...\text{n times}$$
If, after this step, something like $0001000100$ is produced, it can be corrected by the classical computer to give $0000000000$, because the majority of the bits were $0$s and most probably the intended aim of the initial operation was replicating the $0$-bit $10$ times.
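A quick simulation of this classical majority-vote repetition code (an added illustration; the flip probability and number of copies are arbitrary):

import numpy as np

rng = np.random.default_rng(1)

def majority_vote_error_rate(n_copies=11, p_flip=0.2, trials=100_000):
    """Fraction of trials in which majority voting over noisy copies of a 0 decodes to 1."""
    flips = rng.random((trials, n_copies)) < p_flip     # independent bit flips on each copy
    decoded_one = flips.sum(axis=1) > n_copies / 2       # majority of copies read 1 although 0 was sent
    return decoded_one.mean()

print(majority_vote_error_rate())   # roughly 0.01, far below the raw per-copy error rate of 0.2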
But, for qubits such an error correction method won't work, because first of all duplicating qubits directly is not possible due to the no-cloning theorem. And secondly, even if you could replicate $|\psi\rangle = \alpha |0\rangle +\beta |1\rangle$ 10 times, it's highly probable that you'd end up with something like $(\alpha|0\rangle + \beta |1\rangle)\otimes (\alpha e^{i\epsilon}|0\rangle + \beta e^{i\epsilon'}|1\rangle)\otimes (\alpha e^{i\epsilon_2}|0\rangle + \beta e^{i\epsilon_2'}|1\rangle)\otimes ...$ i.e. with errors in the phases, where all the qubits would be in different states (due to the errors). That is, the situation is no longer binary. A quantum computer, unlike a classical computer, can no longer say: "Since the majority of the bits are in the $0$-state, let me convert the rest to $0$!" to correct any error which occurred during the operation. That's because all $10$ states of the $10$ different qubits might be different from each other after the so-called "replication" operation. The number of such possible errors will keep increasing rapidly as more and more operations are performed on a system of qubits. M. Stern has indeed used the right terminology in their answer to your question, i.e., "that doesn't scale well".
So, you need a different breed of error correcting techniques to deal with errors occurring during the operation of a quantum computer, which can deal not only with bit flip errors but also with phase shift errors. Also, it has to be resistant against unintentional decoherence. One thing to keep in mind is that most quantum gates will not be "perfect", even though with the right number of "universal quantum gates" you can get arbitrarily close to building any quantum gate which performs (in theory) a unitary transformation.
Niel de Beaudrap mentions that there are clever ways to apply classical error correction techniques in ways such that they can correct many of the errors which occur during quantum operations, which is indeed correct, and is exactly what current day quantum error correcting codes do. I'd like to add the following from Wikipedia, as it might give some clarity about how quantum error correcting codes deal with the problem described above:
Classical error correcting codes use a syndrome measurement to diagnose which error corrupts an encoded state. We then reverse an error by applying a corrective operation based on the syndrome. Quantum error correction also employs syndrome measurements. We perform a multi-qubit measurement that does not disturb the quantum information in the encoded state but retrieves information about the error. A syndrome measurement can determine whether a qubit has been corrupted, and if so, which one. What is more, the outcome of this operation (the syndrome) tells us not only which physical qubit was affected, but also, in which of several possible ways it was affected. The latter is counter-intuitive at first sight: Since noise is arbitrary, how can the effect of noise be one of only few distinct possibilities? In most codes, the effect is either a bit flip, or a sign (of the phase) flip, or both (corresponding to the Pauli matrices X, Z, and Y). The reason is that the measurement of the syndrome has the projective effect of a quantum measurement. So even if the error due to the noise was arbitrary, it can be expressed as a superposition of basis operations—the error basis (which is here given by the Pauli matrices and the identity). The syndrome measurement "forces" the qubit to "decide" for a certain specific "Pauli error" to "have happened", and the syndrome tells us which, so that we can let the same Pauli operator act again on the corrupted qubit to revert the effect of the error.
The syndrome measurement tells us as much as possible about the error that has happened, but nothing at all about the value that is stored in the logical qubit—as otherwise the measurement would destroy any quantum superposition of this logical qubit with other qubits in the quantum computer.
Note: I haven't given any example of actual quantum error correcting techniques. There are plenty of good textbooks out there which discuss this topic. However, I hope this answer will give the readers a basic idea of why we need error correcting codes in quantum computation.
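As a concrete minimal illustration (an editorial addition, not part of the answer above), here is the textbook three-qubit bit-flip code simulated with plain state vectors: a single bit flip moves the encoded state into a definite eigenspace of the stabilizers $Z_1Z_2$ and $Z_2Z_3$, so measuring them identifies the flipped qubit without revealing or disturbing $\alpha$ and $\beta$.

import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
kron = lambda ops: reduce(np.kron, ops)

alpha, beta = 0.6, 0.8                                    # arbitrary logical amplitudes (0.36 + 0.64 = 1)
ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
encoded = alpha * kron([ket0] * 3) + beta * kron([ket1] * 3)   # alpha|000> + beta|111>

flipped = 1                                               # pretend the middle qubit suffers an X error
error = [I2, I2, I2]
error[flipped] = X
corrupted = kron(error) @ encoded

s1 = corrupted @ kron([Z, Z, I2]) @ corrupted             # expectation value of stabilizer Z1 Z2
s2 = corrupted @ kron([I2, Z, Z]) @ corrupted             # expectation value of stabilizer Z2 Z3
print(f"syndrome (Z1Z2, Z2Z3) = ({s1:+.0f}, {s2:+.0f})")  # (-1, -1): the middle qubit flipped

# Applying X to that qubit restores the encoded state; alpha and beta were never measured.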
Recommended Further Readings:
An Introduction to Quantum Error Correction and Fault-Tolerant Quantum Computation - Daniel Gottesman
Scheme for reducing decoherence in quantum computer memory - Peter Shor
Recommended Video Lecture:
Mini Crash Course: Quantum Error Correction by Ben Reichardt, University of Southern California
– Sanchayan Dutta
I'm not sure the fact that there is a continuum of states plays any role. Classical computation with bits would also have the same problems if our technology were less mature, and indeed it did suffer meaningfully from noise at various times in its development. In both the classical and quantum case, noise doesn't conveniently average away under normal circumstances. – Niel de Beaudrap Mar 14 '18 at 9:49
@NieldeBeaudrap It does play a big role. In classical computation you know that you'd have to deal only with two states, beforehand. Just consider an example: In classical computation if a signal of $5$ mV represents "high" (or the $1$-state) while $0$ mV theoretically represents the "low" (or the $0$-state), if your operation ended up with something like $0.5$ mV it would automatically be rounded off to $0$ mV because it is much closer to $0$ mV than $5$ mV. But in the case of qubits there are an infinite number of possible states and such rounding off doesn't work. – Sanchayan Dutta Mar 14 '18 at 9:55
Of course you're not wrong when you say that even classical computation suffered from the problem of noise. There's a well established theory of classical error correcting codes too! However, the situation is much more dire in the case of quantum computation due to the possibility of an infinite number of states of existence of a single qubit. – Sanchayan Dutta Mar 14 '18 at 9:58
The techniques used for quantum error correction do not involve the fact that the state-space is infinite in any way. The arguments you are making seem to be drawing an analogy between quantum computing and analog computing --- while there is a similarity, it would imply that quantum error correction would be impossible if it were a sound analogy. In contrast, the state-space of many qubits is also like a probability distribution on bit-strings, of which there is also a continuum; and yet just doing error correction on definite bit-strings suffices to suppress error. – Niel de Beaudrap Mar 14 '18 at 10:08
@glS I have removed the first sentence. You're right. I was interpreting computation in an unrelated way. – Sanchayan Dutta Mar 15 '18 at 12:53
noise should average itself out.
Noise doesn't perfectly average itself out. That's the Gambler's Fallacy. Even though noise tends to meander back and forth, it still accumulates over time.
For example, if you generate N fair coin flips and sum them up, the expected magnitude of the difference from exactly $N/2$ heads grows like $O(\sqrt N)$. That's quadratically better than the $O(N)$ you expect from a biased coin, but certainly not 0.
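A quick empirical check of that claim (an added sketch; the trial counts are arbitrary):

import numpy as np

rng = np.random.default_rng(2)

for N in (100, 10_000, 1_000_000):
    heads = rng.binomial(N, 0.5, size=20_000)            # 20,000 experiments of N fair flips each
    deviation = np.abs(heads - N / 2)                    # distance from the noise-free answer N/2
    print(f"N = {N:>9}   mean |heads - N/2| = {deviation.mean():8.1f}"
          f"   ratio to sqrt(N) = {deviation.mean() / np.sqrt(N):.2f}")

# The typical deviation grows like ~0.4*sqrt(N): much slower than N, but it never averages to zero.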
Even worse, in the context of a computation over many qubits the noise doesn't cancel itself nearly as well, because the noise is no longer along a single dimension. In a quantum computer with $Q$ qubits and single-qubit noise, there are $2Q$ dimensions at any given time for the noise to act on (one for each X/Z axis of each qubit). And as you compute with the qubits, these dimensions change to correspond to different subspaces of a $2^Q$ dimensional space. This makes it unlikely for later noise to undo earlier noise, and as a result you're back to $O(N)$ accumulation of noise.
run the operations, say, a hundred times, and pick the average/most common answer?
As computations get larger and longer, the chance of seeing no noise, or of the noise perfectly cancelling out, rapidly becomes so close to 0% that you can't expect to see the correct answer even once, even if you repeated the computation a trillion times.
– Craig Gidney
Why do you need error correction? My understanding is that error correction removes errors from noise, but noise should average itself out.
If you built a house or a road, and noise was a variance (a difference with respect to straightness or direction), the question is not solely or simply "How would it look?" but "How would it be?": a superposition of both efficiency and correctness.
If two people calculated the circumference of a golf ball given a diameter each would get a similar answer, subject to the accuracy of their calculations; if each used several places of decimal it would be 'good enough'.
If two people were provided with identical equipment and ingredients, and given the same recipe for a cake, should we expect identical results?
To make clear what I'm asking, why can't you, instead of involving error correction, simply run the operations, say, a hundred times, and pick the average/most common answer?
You're spoiling the weighing, tapping your finger on the scale.
If you're at a loud concert and try to communicate with the person next to you, do they understand you the first time, every time?
If you tell a story or spread a rumor, (and some people communicate verbatim, some add their own spin, and others forget parts), when it gets back to you does it average itself out and become essentially (but not identically) the same thing you said? - unlikely.
It's like crinkling up a piece of paper and then flattening it out.
All those analogies were intended to offer simplicity over exactness, you can reread them a few times, average it out, and have the exact answer, or not. ;)
A more technical explanation of why quantum error correction is difficult but necessary is given on Wikipedia's "Quantum Error Correction" page:
"Quantum error correction (QEC) is used in quantum computing to protect quantum information from errors due to decoherence and other quantum noise. Quantum error correction is essential if one is to achieve fault-tolerant quantum computation that can deal not only with noise on stored quantum information, but also with faulty quantum gates, faulty quantum preparation, and faulty measurements.".
"Classical error correction employs redundancy. " ...
"Copying quantum information is not possible due to the no-cloning theorem. This theorem seems to present an obstacle to formulating a theory of quantum error correction. But it is possible to spread the information of one qubit onto a highly entangled state of several (physical) qubits. Peter Shor first discovered this method of formulating a quantum error correcting code by storing the information of one qubit onto a highly entangled state of nine qubits. A quantum error correcting code protects quantum information against errors of a limited form.".
– Rob
Certainty Equivalent
What Is the Certainty Equivalent?
The certainty equivalent is a guaranteed return that someone would accept now, rather than taking a chance on a higher, but uncertain, return in the future. Put another way, the certainty equivalent is the guaranteed amount of cash that a person would consider as having the same amount of desirability as a risky asset.
The certainty equivalent represents the amount of guaranteed money an investor would accept now instead of taking a risk of getting more money at a future date.
The certainty equivalent varies between investors based on their risk tolerance, and a retiree would have a lower certainty equivalent because they're less willing to risk their retirement funds on an uncertain payoff.
The certainty equivalent is closely related to the concept of risk premium or the amount of additional return an investor requires to choose a risky investment over a safer investment.
What Does the Certainty Equivalent Tell You?
Investments must pay a risk premium to compensate investors for the possibility that they may not get their money back, and the higher the risk, the higher the premium an investor expects over the average return.
If an investor has a choice between a U.S. government bond paying 3% interest and a corporate bond paying 8% interest and he chooses the government bond, the payoff differential is the certainty equivalent. The corporation would need to offer this particular investor a potential return of more than 8% on its bonds to convince him to buy.
A company seeking investors can use the certainty equivalent as a basis for determining how much more it needs to pay to convince investors to consider the riskier option. The certainty equivalent varies because each investor has a unique risk tolerance.
The term is also used in gambling, to represent the amount of payoff someone would require to be indifferent between it and a given gamble. This is called the gamble's certainty equivalent.
Example of How to Use the Certainty Equivalent
The idea of certainty equivalent can be applied to cash flow from an investment. The certainty equivalent cash flow is the risk-free cash flow that an investor or manager considers equal to a different expected cash flow which is higher, but also riskier. The formula for calculating the certainty equivalent cash flow is as follows:
$$\text{Certainty Equivalent Cash Flow} = \frac{\text{Expected Cash Flow}}{1 + \text{Risk Premium}}$$
The risk premium is calculated as the risk-adjusted rate of return minus the risk-free rate. The expected cash flow is calculated by taking the probability-weighted dollar value of each expected cash flow and adding them up.
For example, imagine that an investor has the choice to accept a guaranteed $10 million cash inflow or an option with the following expectations:
A 30% chance of receiving $7.5 million
A 50% chance of receiving $15.5 million
A 20% chance of receiving $4 million
Based on these probabilities, the expected cash flow of this scenario is:
$$\begin{aligned} \text{Expected Cash Flow} &= 0.3\times\$7.5\text{ Million} + 0.5\times\$15.5\text{ Million} + 0.2\times\$4\text{ Million} \\ &= \$10.8\text{ Million} \end{aligned}$$
Assume the risk-adjusted rate of return used to discount this option is 12% and the risk-free rate is 3%. Thus, the risk premium is (12% - 3%), or 9%. Using the above equation, the certainty equivalent cash flow is:
$$\begin{aligned} \text{Certainty Equivalent Cash Flow} &= \frac{\$10.8\text{ Million}}{1 + 0.09} \\ &= \$9.908\text{ Million} \end{aligned}$$
Based on this, if the investor prefers to avoid risk, he should accept any guaranteed option worth more than $9.908 million.
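The same worked example in a few lines of code (values copied from above):

outcomes = [(0.30, 7.5e6), (0.50, 15.5e6), (0.20, 4.0e6)]     # (probability, cash flow in dollars)

expected_cash_flow = sum(p * cf for p, cf in outcomes)         # $10.8 million
risk_premium = 0.12 - 0.03                                      # risk-adjusted rate minus risk-free rate
certainty_equivalent = expected_cash_flow / (1 + risk_premium)

print(f"Expected cash flow:   ${expected_cash_flow:,.0f}")
print(f"Certainty equivalent: ${certainty_equivalent:,.0f}")    # about $9.91 million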
\begin{definition}[Definition:Right Order Topology on Strictly Positive Integers]
Let $\Z_{>0}$ be the set of strictly positive integers.
For $n \in \Z_{>0}$, let $O_n$ denote the set defined as:
:$O_n := \set {x \in \Z_{>0}: x \ge n}$
Let $\tau$ be the subset of the power set $\powerset {\Z_{>0} }$ be defined as:
:$\tau := \O \cup \set {O_n: n \in \Z_{>0} }$
Then $\tau$ is the '''right order topology on $\Z_{>0}$'''.
Hence the topological space $T = \struct {\Z_{>0}, \tau}$ can be referred to as the '''right order space on $\Z_{>0}$'''.
\end{definition}
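A short verification, added here and not part of the ProofWiki entry: because the sets $O_n$ are nested, $\tau$ is closed under arbitrary unions and finite intersections, so it is indeed a topology:
:$O_m \cap O_n = O_{\max \set {m, n} }$
:$\bigcup_{n \in S} O_n = O_{\min S}$ for any non-empty $S \subseteq \Z_{>0}$, where $\min S$ exists because $\Z_{>0}$ is well-ordered
Together with $\O \in \tau$ by construction and $\Z_{>0} = O_1 \in \tau$, this confirms the defining properties of a topology.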
Doubt about Taylor series: do successive derivatives on a point determine the whole function?
I'm currently relearning Taylor series and yesterday I thought about something that left me puzzled. As far as I understand, whenever you take the Taylor series of any function $f(x)$ around a point $x = a$, the function is exactly equal to its Taylor series, that is:
$$ f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n $$
For example, if we take $f(x) = e^x$ and $x = 0$, we obtain: $ e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} $
My doubt is: the only variables in the Taylor series formula are $f(a), f'(a), f''(a),$ etc., that is, the successive derivatives of the function $f$ evaluated at one point $x = a$. But the Taylor series of $f(x)$ determines the whole function! How is it possible that the successive derivatives of the function evaluated at a single point determine the whole function? Does this mean that if we know the values of $f^{(n)}(a)$, then $f$ is uniquely determined? Is there an intuition as to why the successive derivatives of $f$ at a single point encode the necessary information to determine $f$ uniquely?
Maybe I'm missing a key insight and all my reasoning is wrong, if so please tell where is my mistake.
– David O.
Ignoring convergence issues, intuitively think about a person walking on a path. In order to know where the person is immediately going next, you need to know their position ($f(a)$) and which direction they are headed ($f'(a)$). Now, if you knew which way the person was planning to turn, or curve their path ($f''(a)$), then you could predict their path a little farther. Each increment of knowledge on the person's future trajectory gives you more accuracy on where they will be arbitrarily far into the future. – Ninad Munshi Nov 14 '19 at 9:24
@NinadMunshi. Very good explanation! May I reuse it? Cheers :-) – Claude Leibovici Nov 14 '19 at 9:40
Thanks for the intuitive explanation @NinadMunshi, it really helped me in understanding how it is possible that, if the function is analytic, then it is uniquely determined by the derivatives at a point. – David O. Nov 14 '19 at 9:54
@ClaudeLeibovici Absolutely! My explanation would not do much good if it just collected dust on stackexchange. – Ninad Munshi Nov 14 '19 at 10:29
You're right, in general $f$ is not determined by its derivatives at one single point. Functions satisfying this condition are called analytic. But not all smooth functions are analytic, for example
$$x\mapsto\left\{\begin{array}{c}e^{-\frac{1}{x^2}}, x>0\\0, x\leq 0\end{array}\right.$$ is a smooth function and the derivatives at zero are all zero, hence the Taylor series developed at zero does not determine the function.
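A quick numerical look at this example (an added illustration): the function is strictly positive for every $x>0$, yet every Taylor polynomial at $0$ is identically zero.

import numpy as np

def f(x):
    """The smooth but non-analytic function: exp(-1/x^2) for x > 0, and 0 for x <= 0."""
    x = np.asarray(x, dtype=float)
    safe = np.where(x > 0, x, 1.0)                # avoid division by zero for x <= 0
    return np.where(x > 0, np.exp(-1.0 / safe**2), 0.0)

for x in (0.1, 0.3, 0.5, 1.0):
    print(f"x = {x:3.1f}   f(x) = {float(f(x)):.3e}   Taylor series at 0 predicts: 0")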
Furthermore the exact statement of Taylor's theorem is quite different from what you said. It is as follows:
If $f\in C^{k+1}(\mathbb{R})$, then $$f(x)=\sum_{n=0}^k f^{(n)}(a)(x-a)^n\frac{1}{n!} + f^{(k+1)}(\xi)\frac{1}{(k+1)!}(x-a)^{k+1}$$ for some $\xi$ between $a$ and $x$.
If you now take $k\rightarrow\infty$ it is in general not clear that this error term converges to zero.
– humanStampedist
As an additional note: The existence of non-analytic smooth functions is incredibly important in advanced analysis and differential geometry, where you need test functions ($C^{\infty}_c$-functions are necessarily non-analytic unless they are $0$) to develop weak derivatives by duality, and you need partitions of unity to glue local properties. – WoolierThanThou Nov 14 '19 at 9:33
Thanks for the explanation, I was not familiar with the concept of analytic functions. Is there any theorem that characterises such functions? – David O. Nov 14 '19 at 9:56
@DavidO. That is quite a big topic and maybe worth a new question. Note that the examples here are all or nothing: your example converges for all $x$ and the example here converges for no positive $x$. However, it is possible that the series converges up to a certain distance from $a$ called the radius of convergence. Are you familiar with complex numbers? They help the understanding of this topic. – badjohn Nov 14 '19 at 10:33
@DavidO. In some ways, differentiation is similar with complex functions but one big difference is that in the complex world a once differentiable function is analytic. – badjohn Nov 14 '19 at 10:45
@David O: The easiest characterization is done by using complex analysis. In a sense, complex analysis is the "most fully developed" theory of analytic functions. A real function is real analytic if and only if there exists a complex function which is complex-differentiable (a more elegant condition) in a neighborhood of the real line and equals the given real function at that point (up to the obligatory "type coercion" of course :) ). – The_Sympathizer Nov 15 '19 at 4:06
Functions which are the sum of their Taylor series within the interval (or disk for functions of a complex variable) of convergence are known as analytic functions. Many basic elementary functions are analytic: $\;\exp, \sin,\cos,\sinh,\cosh $ and of course polynomials are analytic on $\mathbf R$ (or $\mathbf C$).
It is not true that, in general, an infinitely differentiable function of a real variable is analytic on the interval of convergence of its Taylor series, as @humanStampedist's example shows.
However, for a function of a complex variable, simply being differentiable suffices to ensure the function is analytic (one usually says holomorphic in this case). This is due to the very strong constraints of the Cauchy-Riemann equations.
– Bernard
Thanks for these insights @Bernard. I'll take a look on those topics in order to learn more about analytic functions. – David O. Nov 14 '19 at 10:43
@DavidO. - just FYI, the term is "analytic", not "analytical". If you search for "analytical", you will have to rely on the search engine's ability to correctly determine what you are really after. – Paul Sinclair Nov 14 '19 at 17:56
HumanStampedist has adequately answered the question. I'd like to mention that, just as there are continuous functions that are nowhere differentiable, such as the Weierstrass function, there are smooth functions (all $n$th derivatives exist at every point) that are nowhere analytic, i.e. at no point does the Taylor series converge to the original function. An example is the Fabius function.
– Robert Furber
One doesn't have to try very hard to find a function that does not agree with its Taylor series everywhere. The absolute value function is a familiar enough function. The Taylor series of $|x|$ at $x = 1$ is $x$. (Written as a polynomial in $x$, the constant term is zero and all terms of degree two and higher are zero. If you ignore the left half of the graph of $|x|$, you should see that this function is "trying" to be a straight line in any reasonably small and/or bounded-above-zero-on-the-left open neighborhood of the expansion point, $1$.)
This Taylor expansion is identical to the function on $x \geq 0$, and is hilariously wrong for $x < 0$. However, expanding the Taylor series at any point on the left half of the real line gives $-x$. This is identical to the function on $x \leq 0$ and hilariously wrong on the right half of the real line.
Why did the Taylor series not "work" everywhere? In any little neighborhood of an $x$ that does not include $0$, the function $|x|$ looks like either a line with slope $1$ or a line with slope $-1$, so this is all the derivatives can see. The sudden change in behaviour at $x = 0$ is not signalled in the derivatives anywhere (except that none of the derivatives exist at $x = 0$). It's almost as if the undefinedness at $x=0$ acts as a barrier -- the Taylor series on one side of that barrier does not replicate behaviour from the other side. (... except in carefully contrived accidents, like $\frac{x^2}{x}$ which is undefined at $0$ so has no derivatives there, but agrees with any of its Taylor series.)
– Eric Towers
Wow, thank you for this example! It's a very simple function yet it reflects the ideas on why the Taylor series doesn't equal the function. One question though: in your example it is clear that the derivatives do not exist at $x=0$, and I can see how this acts as a "barrier". However, in the example proposed by @HumanStampedist, the derivatives do exist at $x = 0$. How would we explain then that the Taylor series does not equal the function in an intuitive way? – David O. Nov 15 '19 at 10:56
@DavidO. : HumanStampedist's example does have a barrier, infinitesimally close to $x = 0$ ... in the complex plane. See [math.stackexchange.com/questions/717676/… for more on this. – Eric Towers Nov 15 '19 at 11:13
@DavidO. : I've added some plots to the cited answer to perhaps make it easier to see that there is a barrier pressed right up against the origin along the imaginary axis. – Eric Towers Nov 17 '19 at 21:10
Original research | Published: 12 February 2018
Statistical evaluation of test-retest studies in PET brain imaging
Richard Baumgartner, Aniket Joshi, Dai Feng, Francesca Zanderigo & R. Todd Ogden
EJNMMI Research, volume 8, article number 13 (2018)
Positron emission tomography (PET) is a molecular imaging technology that enables in vivo quantification of metabolic activity or receptor density, among other applications. Examples of applications of PET imaging in neuroscience include studies of neuroreceptor/neurotransmitter levels in neuropsychiatric diseases (e.g., measuring receptor expression in schizophrenia) and of misfolded protein levels in neurodegenerative diseases (e.g., beta amyloid and tau deposits in Alzheimer's disease). Assessment of a PET tracer's test-retest properties is an important component of tracer validation, and it is usually carried out using data from a small number of subjects.
Here, we investigate advantages and limitations of test-retest metrics that are commonly used for PET brain imaging, including percent test-retest difference and intraclass correlation coefficient (ICC). In addition, we show how random effects analysis of variance, which forms the basis for ICC, can be used to derive additional test-retest metrics, which are generally not reported in the PET brain imaging test-retest literature, such as within-subject coefficient of variation and repeatability coefficient. We reevaluate data from five published clinical PET imaging test-retest studies to illustrate the relative merits and utility of the various test-retest metrics. We provide recommendations on evaluation of test-retest in brain PET imaging and show how the random effects ANOVA based metrics can be used to supplement the commonly used metrics such as percent test-retest.
Random effects ANOVA is a useful model for PET brain imaging test-retest studies. The metrics that ensue from this model are recommended to be reported along with the percent test-retest metric as they capture various sources of variability in the PET test-retest experiments in a succinct way.
Positron emission tomography (PET) is a molecular imaging technology used for in vivo measurement of metabolism and neurochemistry, including measurement of cerebral blood flow, glucose metabolism, oxygen utilization, and density of neuroreceptors or other molecular targets [1, 2]. As an integral component of the validation of novel PET tracers, a test-retest experiment is usually first conducted to measure repeatability of the measurements.
The main purpose of a test-retest experiment is to inform about within-subject variability, i.e., how close the measurements are when they are obtained repeatedly on the same subject under identical conditions. It is common then to compare these measures of repeatability—certainly, when considering multiple methods of processing and/or modeling PET data. Often, standardized measures of repeatability are used as general metrics to help judge the general utility of a tracer, although it is not obvious that it is appropriate to compare these measures across tracers or across molecular targets.
The test-retest experiment is most naturally relevant for evaluating a tracer's utility for use in a study involving multiple measurements on the same subject, e.g., an occupancy study or a study measuring the effect of some intervention. As we will summarize here, most of the indices used to summarize the results of test-retest experiments measure quantities that are important for such experiments. Note, however, that these indices by themselves do not provide all the useful information when considering other types of PET studies, e.g., a cross-sectional study of two groups of subjects.
Still, the test-retest repeatability of a tracer is an important criterion to help select a tracer for a particular target among multiple available tracers [3], although of course several other criteria (e.g., robust radiochemistry, large specific-to-nonspecific signal, and absence of off-target binding) are also important factors. Going beyond tracer evaluation, test-retest studies also provide useful data for determining the optimal approach among various quantification techniques (e.g., modeling strategies or outcome measures) for a given tracer. Test-retest studies are also useful for understanding the relative variability among multiple region of interests (ROIs).
In general, test-retest repeatability usually refers to measuring the variability when repeated measurements are acquired on the same experimental unit under identical (or nearly identical) conditions [4]. Various metrics have been proposed in the statistical and PET literature to evaluate test-retest experiments, such as percent test-retest (PTRT), intraclass correlation coefficient (ICC), within-subject coefficient of variation (WSCV), and repeatability coefficient (RC), and we will describe these in some detail in the next section. Briefly, these metrics can be classified as either scaled or unscaled indices of agreement [5]. Unscaled indices of agreement summarize the test-retest repeatability based on differences of the original measurements and are therefore obtained in the original unit of measurement; an example is the RC. In contrast, scaled indices of agreement are normalized with respect to some given quantity and are therefore (unitless) relative measures. A common example of a scaled measure is "percent test-retest," which is commonly reported in PET studies.
A very recent article by Lodge [6], assesses repeatability of very common PET-based measurements in oncology applications focusing on only one tracer (18F–FDG) and one summary measure (standardized uptake value (SUV)). In that paper, Lodge reviews multiple relevant test-retest studies that report results in inconsistent ways depending on several repeatability measures, and so syntheses of these studies is quite challenging. This illustrates the need to critically evaluate the various measures that are reported in the PET imaging literature. Our objective here is to provide a comprehensive assessment of test-retest evaluations in PET brain imaging, in particular with respect to the assumptions of the random effects ANOVA model that underlies the ICC statistic. Similar critical reviews of repeatability experiments have recently been conducted for other modalities (e.g., electrocardiogram data [7]). To illustrate the utility of the different test-retest metrics, we reevaluated data from five published brain PET test-retest studies in humans. Finally, we provide a discussion of the merits and applicability of the test-retest metrics for future PET brain imaging studies.
Description of the data sets
We considered five published brain PET test-retest datasets [8,9,10,11,12], whose details are reported in Table 1.
Table 1 Summary table of the considered clinical brain PET test-retest data sets
Statistical model for test-retest
The most basic model for a test-retest experiment is the standard random effects ANOVA:
$$ {y}_{ij}=\mu +{s}_i+{e}_{ij} $$
where y ij is the PET outcome measure corresponding to scan j observed on the i-th subject (i = 1…n) (typically two repeated scans (j = 1,2) are obtained in brain PET test-retest studies), s i is the subject-level random effect, and e ij is the measurement error, with s i and e ij mutually independent and normally distributed:
$$ {\displaystyle \begin{array}{l}{s}_i\sim N\left(0,{\sigma}_s^2\right)\kern1em \\ {}{e}_{ij}\sim N\left(0,{\sigma}_e^2\right)\end{array}} $$
where $\sigma_s$ and $\sigma_e$ are the between- and within-subject standard deviations, respectively.
Estimation of the parameters $\mu$, $\sigma_e$, and $\sigma_s$ in model (1) is described in the Appendix for completeness. The computation was implemented using the R package "agRee" [13]. There are two scaled indices and one unscaled index of agreement that naturally ensue from model (1) and that have been proposed for characterization of a test-retest experiment:
1) the WSCV [14, 15], defined as
$$ \mathrm{WSCV}=\frac{\sigma_e}{\mu } $$
2) the ICC, defined as
$$ \mathrm{ICC}=\frac{\sigma_S^2}{\sigma_S^2+{\sigma}_e^2} $$
3) an unscaled RC, that is given as
$$ \mathrm{RC}=\sqrt{2}\kern0.5em {z}_{1-\alpha /2}\kern0.5em {\sigma}_e, $$
where $z_{1-\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution. The RC can also be interpreted as the smallest detectable difference (SDD) between a test and retest measurement for a given subject. It is defined as a $100(1-\alpha/2)\%$ quantile of the distribution of test-retest differences. Thus, this quantile represents the limits of a typical range containing a large proportion (e.g., 95%) of the distribution of test-retest differences (with $\alpha = 0.05$, $z_{1-\alpha/2} = 1.96$ [15]).
As described in the "Introduction" section, percent test-retest (PTRT) is a ubiquitous measure in PET brain imaging although it is not often used in other related fields. In early PET test-retest papers, signed (or raw) mean normalized test-retest differences were considered [16, 17], but later authors generally used the absolute values of the normalized differences instead [18]. Following this latter definition, PTRT is calculated as follows:
$$ \mathrm{PTRT}=\frac{1}{n}\sum \limits_{i=1}^n\left|2\frac{y_{i2}-{y}_{i1}}{y_{i2}+{y}_{i1}}\right| $$
where $n$ is the number of subjects in the test-retest study and $y_{i1}$ and $y_{i2}$ are the estimated PET outcome measures obtained for the $i$-th subject in a given region in the test and in the retest scan, respectively.
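To make these definitions concrete, the sketch below computes all four metrics from a subjects-by-two matrix of outcome measures, using the standard one-way random effects ANOVA mean-square estimators for the variance components. The data are simulated purely for illustration (the analyses in this paper used the R package "agRee"), and the parameter values are arbitrary.

import numpy as np

rng = np.random.default_rng(3)
n_subj, mu, sigma_s, sigma_e = 12, 2.0, 0.4, 0.15             # assumed "true" values for the simulation
subject_effect = rng.normal(0.0, sigma_s, size=n_subj)
y = mu + subject_effect[:, None] + rng.normal(0.0, sigma_e, size=(n_subj, 2))   # test/retest outcomes

subj_means, grand_mean = y.mean(axis=1), y.mean()
ms_within = np.sum((y - subj_means[:, None]) ** 2) / n_subj          # n(k - 1) = n degrees of freedom, k = 2
ms_between = 2.0 * np.sum((subj_means - grand_mean) ** 2) / (n_subj - 1)

var_e = ms_within                                                     # within-subject variance
var_s = max((ms_between - ms_within) / 2.0, 0.0)                      # between-subject variance

icc = var_s / (var_s + var_e)
wscv = np.sqrt(var_e) / grand_mean
rc = np.sqrt(2.0) * 1.96 * np.sqrt(var_e)                             # repeatability coefficient (SDD)
ptrt = np.mean(np.abs(2.0 * (y[:, 1] - y[:, 0]) / (y[:, 1] + y[:, 0])))

print(f"ICC = {icc:.2f}   WSCV = {wscv:.1%}   RC = {rc:.3f}   PTRT = {ptrt:.1%}")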
Bland-Altman plot
Bland-Altman plots show mean vs. difference of test-retest observations for each subject involved in the study and therefore provide a comprehensive visual assessment of the data [19].
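A minimal sketch of such a plot on simulated data (an added illustration): subject means on the x-axis, test-retest differences on the y-axis, with the mean difference and the 95% limits of agreement drawn as horizontal lines.

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
y = 2.0 + rng.normal(0, 0.4, size=(12, 1)) + rng.normal(0, 0.15, size=(12, 2))   # simulated test/retest

means, diffs = y.mean(axis=1), y[:, 1] - y[:, 0]
bias, sd = diffs.mean(), diffs.std(ddof=1)

plt.scatter(means, diffs)
plt.axhline(bias, linestyle='--')                          # mean difference (bias)
for limit in (bias - 1.96 * sd, bias + 1.96 * sd):         # 95% limits of agreement
    plt.axhline(limit, linestyle=':')
plt.xlabel('Mean of test and retest')
plt.ylabel('Retest minus test')
plt.show()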
PET test-retest data
The total volume of distribution ($V_T$) [20] was considered as the PET outcome measure and was calculated using three different quantification strategies: one- (1TC) and two-tissue compartment (2TC) models [21], and a graphical approach, the likelihood estimation in graphical analysis (LEGA) [22]. It should be noted that the purpose here of considering three different quantification approaches is not to revisit the question of determining the "best" modeling approach for each tracer. This question has been adequately addressed in the original manuscripts for the respective tracers. Rather, multiple quantification approaches provide additional datasets to illustrate how the different test-retest metrics can be applied and what attributes of the data and quantification method can be measured. Ten ROIs were considered in common across all five data sets: anterior cingulate, amygdala, dorsal caudate, dorsolateral prefrontal cortex, gray matter cerebellum, hippocampus, insula, midbrain, parietal lobe, and ventral striatum. In the case of [11C]WAY-100635, an additional ROI, the white matter cerebellum, was considered [11], but not included in this analysis to maintain the same ROIs across all tracers. The test-retest variability is a result of noise in the ROI and in the arterial input function and is impacted by the size of the ROI. The analysis in this paper does not consider ROI size as a factor, since the ROI size is the same for different tracers binding to the same target.
The variability of PTRT and WSCV across datasets and considered ROIs is shown in Fig. 1. For a given dataset, each point in this plot represents a particular ROI. Both metrics show similar values (between 5 and 20%) for most datasets and for the majority of ROIs. Whether the test-retest metric of any given tracer is adequate for any particular clinical study depends on the effect size being investigated. Among the relevant ROIs, the ROIs with better test-retest reliability will typically be used for the main analysis. For some datasets, measures of reliability may be different depending on the selected modeling approach. As an example, results for the DS1 dataset ([11C]CUMI-101) are summarized below. According to the PTRT and WSCV criteria, for most datasets, the 2TC model shows worse test-retest reliability than the more parsimonious 1TC model, as expected. Graphical approaches (such as LEGA) tend to be more robust than kinetic models to the presence of noise in the data, and thus usually yield fewer or no outliers, which can influence test-retest repeatability. Among kinetic models, the 2TC model is more prone to generating outliers than the more parsimonious 1TC model. To show how the various ROIs perform across the different test-retest metrics, each ROI is plotted in the same color across datasets and fitting methods.
Percent test-retest (PTRT, upper panel) and within-subject coefficient of variation (WSCV, lower panel) across the considered datasets. Each point in the plot, for a given dataset, represents a particular region of interest (ROI) so that the plot represents the variability of PTRT or WSCV across the considered ROIs. For each data set, DS1, DS2, DS3, DS4, and DS5, there were three quantification methods investigated (1, 2, and L denote one-tissue compartment model, two-tissue compartment model, and likelihood estimation in graphical analysis, respectively)
The ICC values obtained across datasets (shown in Fig. 2 in the same fashion as in Fig. 1) provide a picture similar to PTRT and WSCV in terms of test-retest repeatability. The ICC ranges from very high (close to 1) to moderate (around 0.5). Again, outlying ROIs for the 2TC model in datasets DS1 and DS3 considerably reduce the corresponding ICC.
Intraclass correlation coefficient (ICC) across the considered datasets. Each point in the plot, for a given dataset, represents a particular region of interest (ROI) so that the plot represents the variability of ICC across the considered ROIs. For each data set, DS1, DS2, DS3, DS4, and DS5, there were three quantification methods investigated (1, 2, and L denote one-tissue compartment model, two-tissue compartment model, and likelihood estimation in graphical analysis, respectively)
Figure 3 shows RC as an unscaled index of agreement along with the grand mean μ derived from random effects ANOVA (Eq. 1). The outlying ROIs appear as influential points in the plots.
Repeatability coefficient (RC) and grand mean across the considered datasets (upper panel and lower panel respectively). Each point in the plot, for a given dataset, represents a particular region of interest (ROI) so that the plot represents the variability of RC across the considered ROIs. For each data set, DS1, DS2, DS3, DS4, and DS5, there were three quantification methods investigated (1, 2, and L denote one-tissue compartment model, two-tissue compartment model, and likelihood estimation in graphical analysis, respectively)
A key utility of the test-retest metrics is selecting a tracer among many for a particular target. For example, [11C]WAY-100635 and [11C]CUMI-101 are both tracers for the serotonin 1A receptor. The ICC, PTRT, and WSCV show lower test-retest variability for [11C]CUMI-101 compared to [11C]WAY-100635 (Figs. 1 and 2), indicating that, considering only test-retest repeatability, [11C]CUMI-101 would be the preferred tracer for the serotonin 1A receptor.
In order to investigate various real-life scenarios, a graphical representation of the data by means of the Bland-Altman plots is shown for a particular dataset ([11C]CUMI-101) and a particular ROI (amygdala) across different quantification strategies (Fig. 4). Ninety-five percent of differences between test-retest measures are expected to lie between the limits of agreement, and these lines indicate if the two measures can be interchanged without altering the clinical interpretation [15].
Bland-Altman plots of the [11C]CUMI-101 dataset DS1 for (a) the 1TC model, (b) the 2TC model, and (c) the LEGA approach
The other metrics obtained from this particular dataset and ROI are reported in Table 2. From Fig. 4a, c as well as Table 2, it can be seen that the test-retest repeatability for the 1TC model and LEGA is very good across both scaled indices of agreement (WSCV and PTRT are about 5%, and ICC is higher than 0.93), and the Bland-Altman plots show random variation across the sampling range, albeit with a small bias for both methods. Good repeatability can also be observed in the ratio of the RC to the grand mean μ, as this ratio is the WSCV scaled by a constant factor. For the 2TC model, however, the test-retest repeatability is quite poor. As shown in the Bland-Altman plot in Fig. 4b, there is an influential, outlying observation for a particular subject. This may be due to poor identifiability of one of the four kinetic rate parameters of the 2TC model, which results in an unreasonably high V_T value for that ROI and thus may degrade the overall test-retest metrics. Notably, the PTRT appears to be less sensitive to the outlier. This may be explained by the local, as opposed to global, scaling of the PTRT and WSCV, respectively. This potential insensitivity of PTRT to outlier values underscores the utility of Bland-Altman plots for visualizing test-retest data. This result also strongly underscores the value of reporting more than just PTRT in PET test-retest studies, since this metric attenuates a poor test-retest data point, while ICC and WSCV appropriately highlight its influence.
Table 2 Agreement indices for amygdala in the [11C]CUMI-101 dataset for the three considered quantification approaches
Our main goal was to investigate current approaches to the evaluation of test-retest experiments in PET brain imaging from a statistical point of view and to provide insights and guidance for using indices of agreement in addition to the typically reported PTRT metric. In this evaluation, the random effects ANOVA model underpins the rationale for most metrics and we found it to be a useful model for brain PET imaging, as it describes and quantifies the test-retest PET experiments in a succinct way, while at the same time capturing various random variations present in the data. With respect to random effects ANOVA, three metrics obtained from the model (ICC, RC, and WSCV) reveal several aspects of the data. The ICC provides information about distinguishability of the subjects [23]. As ICC is a ratio of between-subject variance to total variance, it quantifies the agreement of the test-retest readings (given by the within-subject standard deviation (WSSD)) relative to the spread of the subjects (characterized by between-subject standard deviation). The higher the between-subject variability is, the better the distinguishability. As ICC depends on the between-subject variability expressed by the between-subject deviation, it has been pointed out that care needs to be paid to comparisons of the ICC across groups for which the between-subject variability may be different [23]. WSCV provides information about the agreement between test-retest readings with respect to the overall signal (estimated as population mean from the random effects ANOVA model). RC is an unscaled index of agreement, reflecting agreement between the test-retest readings proportional to the WSSD (which is estimated as a square root of the within-subject mean sum of squares or WSMSS).
In the PET imaging literature, several test-retest outcome metrics are commonly reported, but there has been no general consensus as to which outcome metrics should be used. We found it useful to classify the metrics based on the underlying statistical model, such as random effects ANOVA vs. other metrics. The most popular metrics based on the random effects ANOVA are ICC and WSMSS [8,9,10,11,12, 24, 25]. WSMSS is directly related to the RC, as the square root of WSMSS is an estimate of the WSSD. WSCV, which also ensues from the random effects ANOVA model, is only rarely reported in test-retest studies in PET brain imaging [11]. In PET test/retest studies, ICC is usually calculated assuming a one-way ANOVA (4). However, in some cases, a two-way mixed effect model has also been applied [26]. Since typical test/retest studies consist of two images per subject, we generally recommend calculating ICC according to the one-way model.
The most commonly used test-retest metric in PET imaging is PTRT (reported in virtually all PET imaging studies that include a test-retest experiment). PTRT is obtained from the mean normalized differences of test-retest samples. With respect to the random effects ANOVA model, PTRT does not estimate any parameter or function of parameters of the model. Using a first-order Taylor expansion (see also Appendix), it can be shown that the mean normalized differences are akin to taking a log transform of the data. Therefore, it is expected that the PTRT will not be as sensitive to outliers, as these will be scaled "locally" by the corresponding test-retest mean. Also, due to local scaling, the spread of PTRT is small compared to ICC, where the scaling is global. As a result, PTRT may significantly understate the test-retest variability, as seen in the analysis of the [11C]CUMI-101 dataset (Table 2). Both PTRT and WSCV provide an intuition for the tracer's limit on detecting differences (e.g., a difference smaller than PTRT and WSCV is unlikely to be detected). The overall rank ordering of regions in terms of test-retest reliability is similar between PTRT and WSCV. Due to the inherently small sample size in PET reliability experiments, confidence intervals for the test-retest metrics will generally be fairly wide. Thus, small differences in these measures may not be meaningful. As a general recommendation, the random effects ANOVA model is a useful model for PET test-retest studies and therefore measures ensuing from it should be reported together with the PTRT, in the case of two repeated measures (one test and one retest). Although more than two repetitions are not typical in PET imaging, it is worth noting that PTRT is not straightforwardly generalizable to more than two test/retest periods, whereas the ANOVA indices can be applied naturally regardless of the number of repeated observations.
Test-retest metrics that are directly derived from the random ANOVA model (WSCV, ICC, and RC) can be also used for sample size calculation when planning a study that involves multiple PET scans per subject. A method for sample size calculation for ICC was suggested in [27], which is based on determination of necessary sample size to achieve pre-specified precision of ICC given by a corresponding confidence interval width. This approach can be used in a straightforward way also for the WSCV and RC indices, but not for the PTRT. We emphasize that while these summaries are quite valuable for planning studies that involve multiple PET scans per subject, they are not directly relevant for planning cross-sectional studies. For example, for a pre-post study design, within-subject standard deviation obtained from a test-retest experiment may be used for sample size calculation given an assumed effect size (mean difference between pre- and post- periods) as shown in [28].
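As a rough, hypothetical sketch of the last point (a normal-approximation calculation only; [28] gives the full treatment), the within-subject SD from a test-retest study can feed a paired pre-post sample size calculation, assuming the pre-post difference has SD equal to sqrt(2) times the within-subject SD:

```python
import numpy as np
from scipy.stats import norm

def paired_sample_size(sigma_e, delta, alpha=0.05, power=0.8):
    """Approximate n for a paired pre-post design, assuming the pre-post
    difference has SD sqrt(2)*sigma_e and an expected mean change of delta."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return int(np.ceil((z_a + z_b) ** 2 * 2 * sigma_e ** 2 / delta ** 2))

# e.g., within-subject SD of 1.0 V_T units and an expected change of 1.5 units
print(paired_sample_size(sigma_e=1.0, delta=1.5))
```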
Bland-Altman plots represent a mainstay in the graphical display of test-retest data. However, they are rarely used in PET brain imaging [29]. Bland-Altman plots should be used as a first step in the analysis as they may be helpful in better understanding the dependence of variability on the signal strength as well as potential bias between test and retest measurements.
When characterizing test-retest properties of a particular tracer, one may aim at an overall measure across several ROIs or at a region-specific measure of reliability in a priori regions with hypothesized or confirmed biological relevance to the population and/or application at hand. In our investigation, we found that some ROIs may exhibit better performance than others, so ROI-wise comparisons are worth considering. In addition, various ROIs may show different uptake characteristics that influence their noise properties (e.g., high-binding vs. low-binding ROIs), and in that case, test-retest properties could be investigated region-by-region; however, pooling all ROIs into an aggregate test-retest metric may also be carried out if there is an application-specific requirement. Differences in ROI size influence the noise in the region, which in turn drives the test-retest repeatability metrics. Thus, ROI size will not have an impact on the conclusions drawn from test-retest repeatability metrics if the image processing is performed in a uniform fashion across studies, which was the case in the datasets chosen for this paper.
All the scaled metrics are useful for comparing repeatability of the same ROIs across different tracers as well as of different ROIs of the same tracer. As seen in the case of [11C]CUMI-101 and [11C]WAY-100635 for the serotonin 1A receptor, all things being equal, these repeatability metrics can help choose the tracer for a given target.
Random effects ANOVA is a useful model for PET brain imaging test-retest studies. The metrics that ensue from this model, such as ICC, RC, and WSCV, should be reported along with the percent test-retest metric, as they capture the various sources of variability in PET test-retest experiments in a succinct way.
Dierckx RAJO, de Vries EFJ, van Waarde A, den Boer JA. PET and SPECT in psychiatry. Berlin Heidelberg: Springer-Verlag; 2014.
Jones T, Rabiner EA. The development, past achievements, and future directions of brain PET. J Cereb Blood Flow Metab. 2012;32:1426–54.
Kuwabara H, Chamroonrat W, Mathews W, Waterhouse R, Brasic JR, Guevara MR, Kumar A, Hamill T, Mozley PD, Wong DF. Evaluation of 11C-ABP688 and 18FFPEB for imaging mGluR5 receptors in the human brain. J Nucl Med. 2011;52:390.
Raunig DL, McShane L, Pennello G, et al. Quantitative imaging biomarkers: a review of statistical methods for technical performance assessment. Stat Methods Med Res. 2014;24(1):27–67.
Barnhart HX, Haber MJ, Lin LI. An overview on assessing agreement with continuous measurements. J Biopharm Stat. 2007;17:529–69.
Lodge MA. Repeatability of SUV in oncologic 18F-FDG PET. J Nucl Med. 2017;58:523–32.
Crowley AL, Yow E, Barnhart HX, Daubert MA, Bigelow R, Sullivan DC, Pencina M, Douglas PS. Critical review of current approaches for echocardiographic reproducibility and reliability assessment in clinical research. J Am Soc Echocardiogr. 2016;29:1144–54. e1147
Milak MS, DeLorenzo C, Zanderigo F, Prabhakaran J, Kumar JS, Majo VJ, Mann JJ, Parsey RV. In vivo quantification of human serotonin 1A receptor using 11C-CUMI-101, an agonist PET radiotracer. J Nucl Med. 2010;51:1892–900.
Ogden RT, Ojha A, Erlandsson K, Oquendo MA, Mann JJ, Parsey RV. In vivo quantification of serotonin transporters using [(11)C]DASB and positron emission tomography in humans: modeling considerations. J Cereb Blood Flow Metab. 2007;27:205–17.
DeLorenzo C, Kumar JS, Zanderigo F, Mann JJ, Parsey RV. Modeling considerations for in vivo quantification of the dopamine transporter using [(11)C]PE2I and positron emission tomography. J Cereb Blood Flow Metab. 2009;29:1332–45.
Parsey RV, Slifstein M, Hwang DR, Abi-Dargham A, Simpson N, Mawlawi O, Guo NN, Van Heertum R, Mann JJ, Laruelle M. Validation and reproducibility of measurement of 5-HT1A receptor parameters with [carbonyl-11C]WAY-100635 in humans: comparison of arterial and reference tisssue input functions. J Cerebral Blood Flow Metab. 2000;20:1111–33.
DeLorenzo C, Kumar JS, Mann JJ, Parsey RV. In vivo variation in metabotropic glutamate receptor subtype 5 binding using positron emission tomography and [11C]ABP688. J Cereb Blood Flow Metab. 2011;31:2169–80.
Feng D: agRee: Various methods for measuring agreement. Available at http://cran.r-project.org/web/packages/agRee.
Quan H, Shih WJ. Assessing reproducibility by the within-subject coefficient of variation with random effects models. Biometrics. 1996;52(4):1194–203.
Barnhart HX, Barboriak DP. Applications of the repeatability of quantitative imaging biomarkers: a review of statistical analysis of repeat data sets. Transl Oncol. 2009;2:231–5.
Holcomb HH, Cascella NG, Medoff DR, Gastineau EA, Loats H, Thaker GK, Conley RR, Dannals RF, Wagner HN Jr, Tamminga CA. PET-FDG test-retest reliability during a visual discrimination task in schizophrenia. J Comput Assist Tomogr. 1993;17:704–9.
Seibyl JP, Laruelle M, van Dyck CH, Wallace E, Baldwin RM, Zoghbi S, Zea-Ponce Y, Neumeyer JL, Charney DS, Hoffer PB, Innis RB. Reproducibility of iodine-123-beta-CIT SPECT brain measurement of dopamine transporters. J Nucl Med. 1996;37:222–8.
Lopresti BJ, Klunk WE, Mathis CA, Hoge JA, Ziolko SK, Lu X, Meltzer CC, Schimmel K, Tsopelas ND, DeKosky ST, Price JC. Simplified quantification of Pittsburgh compound B amyloid imaging PET studies: a comparative analysis. J Nucl Med. 2005;46:1959–72.
Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999;8:135–60.
Innis RB, Cunningham VJ, Delforge J, Fujita M, Gjedde A, Gunn RN, Holden J, Houle S, Huang SC, Ichise M, Iida H, Ito H, Kimura Y, Koeppe RA, Knudsen GM, Knuuti J, Lammertsma AA, Laruelle M, Logan J, Maguire RP, Mintun MA, Morris ED, Parsey R, Price JC, Slifstein M, Sossi V, Suhara T, Votaw JR, Wong DF, Carson RE. Consensus nomenclature for in vivo imaging of reversibly binding radioligands. J Cereb Blood Flow. 2007;27:1533–9.
Gunn RN, Gunn SR, Cunningham VJ. Positron emission tomography compartmental models. J Cereb Blood Flow Metab. 2001;21:635–52.
Ogden RT. Estimation of kinetic parameters in graphical analysis of PET imaging data. Stat Med. 2003;22:3557–68.
Carrasco JL, Caceres A, Escaramis G, Jover L. Distinguishability and agreement with continuous data. Stat Med. 2014;33:117–28.
Kodaka F, Ito H, Kimura Y, Fujie S, Takano H, Fujiwara H, Sasaki T, Nakayama K, Halldin C, Farde L, Suhara T. Test-retest reproducibility of dopamine D2/3 receptor binding in human brain measured by PET with [11C]MNPA and [11C]raclopride. Eur J Nucl Med Mol Imaging. 2013;40:574–9.
Collste K, Forsberg A, Varrone A, Amini N, Aeinehband S, Yakushev I, Halldin C, Farde L, Cervenka S. Test-retest reproducibility of [(11)C]PBR28 binding to TSPO in healthy control subjects. Eur J Nucl Med Mol Imaging. 2016;43:173–83.
Ettrup A, Svarer C, McMahon B, da Cunha-Bang S, Lehel S, Moller K, Dyssegaard A, Ganz M, Beliveau V, Jorgensen LM, Gillings N, Knudsen GM. Serotonin 2A receptor agonist binding in the human brain with [(11)C]Cimbi-36: test-retest reproducibility and head-to-head comparison with the antagonist [(18)F]altanserin. NeuroImage. 2016;130:167–74.
Zou GY. Sample size formulas for estimating intraclass correlation coefficients with precision and assurance. Stat Med. 2012;31:3972–81.
Julious S. Tutorial in biostatistics. Sample sizes for clinical trials with normal data. Stat Med. 2004;23:1921–86.
Normandin MD, Zheng MQ, Lin KS, Mason NS, Lin SF, Ropchan J, Labaree D, Henry S, Williams WA, Carson RE, Neumeister A, Huang Y. Imaging the cannabinoid CB1 receptor in humans with [11C]OMAR: assessment of kinetic analysis methods, test-retest reproducibility, and gender differences. J Cereb Blood Flow Metab. 2015;35:1313–22.
Shoukri MM, Elkum N, Walter SD. Interval estimation and optimal design for the within subject coefficient of variation for continuous and binary variables. BMC Med Res Methodol. 2006;6:24.
This work was not supported by any grants or other funding sources.
Merck and Co., Inc., Kenilworth, NJ, USA
Richard Baumgartner
& Dai Feng
Novartis Institutes for Biomedical Research, Cambridge, USA
Aniket Joshi
Department of Psychiatry, Columbia University Medical Center, New York, NY, USA
Francesca Zanderigo
Molecular Imaging and Neuropathology Division, New York State Psychiatric Institute, New York, NY, USA
& R. Todd Ogden
Department of Biostatistics, Mailman School of Public Health, Columbia University, New York, NY, USA
R. Todd Ogden
RB, AJ, and TO conceived the study. RB, DF, TO, FZ, and AJ developed statistical analysis plan and drafted the manuscript. RB and DF performed the statistical analysis. RB and FZ coordinated the effort. All authors read and approved the final manuscript.
Correspondence to Richard Baumgartner.
This article does not contain any studies with human participants or animals performed by any of the authors.
Richard Baumgartner and Dai Feng are employees of Merck and Co., Inc. and own stock of Merck and Co., Inc. Aniket Joshi is an employee of Novartis. Francesca Zanderigo and Todd Ogden declare that they have no conflict of interest.
Estimation of the parameters in the test-retest experiment (estimators are denoted by a hat):
$$ \begin{array}{l} \widehat{\sigma}_e^2=\mathrm{WSMSS}=\frac{1}{n}\sum\limits_{i=1}^n\sum\limits_{j=1}^2\left(y_{ij}-\overline{y}_i\right)^2\\[4pt] \mathrm{BSMSS}=\frac{1}{n-1}\sum\limits_{i=1}^n\left(\overline{y}_i-\widehat{\mu}\right)^2\\[4pt] \widehat{\mu}=\frac{1}{n}\sum\limits_{i=1}^n\overline{y}_i\\[4pt] \widehat{\sigma}_S^2=\mathrm{BSMSS}-\tfrac{1}{2}\,\mathrm{WSMSS}\\[4pt] \widehat{\mathrm{RC}}=\sqrt{2}\;z_{1-\alpha/2}\,\widehat{\sigma}_e \end{array} $$
where WSMSS and BSMSS are the within-subject and between-subject mean sums of squares, respectively.
Also, confidence intervals for the estimates of ICC and WSCV are available (see [14] and [30], respectively).
For the RC under the one-way random ANOVA model, the confidence limits of the exact 100(1−α)% CI can be obtained as follows (r is the number of repetitions, typically r = 2):
$$ \begin{array}{l} \widehat{\mathrm{RC}}_L=z_{1-\alpha/2}\sqrt{2n\left(r-1\right)\mathrm{WSMSS}/{\chi}_{n\left(r-1\right)}^2\left(1-\alpha/2\right)}\\[4pt] \widehat{\mathrm{RC}}_U=z_{1-\alpha/2}\sqrt{2n\left(r-1\right)\mathrm{WSMSS}/{\chi}_{n\left(r-1\right)}^2\left(\alpha/2\right)}\\[4pt] \mathrm{CI}_{\mathrm{WIDTH}}=\widehat{\mathrm{RC}}_U-\widehat{\mathrm{RC}}_L \end{array} $$
where \( {\chi}_d^2\left(\alpha \right) \) is an α quantile of χ2 distribution with d degrees of freedom.
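In code, these confidence limits follow directly from the WSMSS (an illustrative sketch with hypothetical numbers, using the chi-square and normal quantile functions from scipy):

```python
import numpy as np
from scipy.stats import chi2, norm

def rc_confidence_interval(wsmss, n, r=2, alpha=0.05):
    """Exact CI for the repeatability coefficient under one-way random ANOVA."""
    z = norm.ppf(1 - alpha / 2)
    df = n * (r - 1)
    lower = z * np.sqrt(2 * df * wsmss / chi2.ppf(1 - alpha / 2, df))
    upper = z * np.sqrt(2 * df * wsmss / chi2.ppf(alpha / 2, df))
    return lower, upper

print(rc_confidence_interval(wsmss=0.16, n=8))
```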
Mean normalized difference as a Taylor expansion-based approximation of log transformed differences.
Consider two real numbers y2 and y1 (e.g. they could represent two test-retest measurements):
Then, their difference (diff), mean normalized difference (mdiff), and log difference (ldiff) are defined as follows:
$$ \begin{array}{l} \mathrm{diff}=y_2-y_1\\[4pt] \mathrm{mdiff}=\left(y_2-y_1\right)\frac{1}{\left(y_2+y_1\right)/2}=\frac{2\,\mathrm{diff}}{y_2+y_1}\\[4pt] \mathrm{ldiff}=\log y_2-\log y_1=\log\frac{y_2}{y_1} \end{array} $$
Let R be defined as follows:
$$ R=\frac{y_2}{y_1}-1 $$
Then expressing mdiff and ldiff in terms of R and expanding them as the Taylor series in terms of R we obtain the following:
$$ \begin{array}{l} \mathrm{mdiff}=R-\frac{R^2}{2}+\frac{R^3}{4}-\frac{R^4}{8}+\dots+{\left(-1\right)}^{i-1}\frac{R^i}{2^{i-1}}+\dots\\[4pt] \mathrm{ldiff}=R-\frac{R^2}{2}+\frac{R^3}{3}-\frac{R^4}{4}+\dots+{\left(-1\right)}^{i-1}\frac{R^i}{i}+\dots \end{array} $$
We observe that the first two terms of the Taylor expansions for mdiff and ldiff are identical; they differ only in the terms of order three and higher. Therefore, mdiff can be considered an approximation of ldiff.
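A quick numerical check of this approximation with made-up values:

```python
import numpy as np

y1, y2 = 10.0, 11.5
mdiff = 2 * (y2 - y1) / (y2 + y1)
ldiff = np.log(y2 / y1)
print(mdiff, ldiff)   # about 0.1395 vs. 0.1398: they agree up to second order in R
```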
\begin{document}
\title[Projective completions and quantum Lefschetz]{\
\\ Virtual cycles on projective completions and quantum Lefschetz formula
} \author[J. Oh]{Jeongseok Oh}
\begin{abstract} For a compact quasi-smooth derived scheme $M$ with $(-1)$-shifted cotangent bundle $N$, there are at least two ways to localise the virtual cycle of $N$ to $M$, via torus and cosection localisations, introduced by Jiang-Thomas \cite{JT}. We produce virtual cycles on both the projective completion $\bN:=\PP(N\oplus{\mathcal{O}}_M)$ and the projectivisation $\PP(N)$, and show that the ones on $\bN$ push down to the Jiang-Thomas cycles and that the one on $\PP(N)$ computes the difference.
Using similar ideas we give an expression for the difference of the quintic and $t$-twisted quintic GW invariants of Guo-Janda-Ruan \cite{GJR}. \end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\section*{Introduction} Let $N$ be a quasi-projective scheme equipped with the symmetric obstruction theory $\phi: \EE_N \to \mathbb{L}_N$, where $\mathbb{L}_N$ is the (truncated) cotangent complex of $N$. Then $\phi$ defines the degree zero virtual cycle $$ [N]^{\vir}\ \in\ A_{0}\(N\). $$ When $N$ is compact, we obtain an invariant $\deg [N]^{\vir} \in \Z$\footnote{It is Donaldson-Thomas invariant when $N$ is a moduli space of stable sheaves on Calabi-Yau $3$-fold.}. Even if $N$ is not compact, invariants can be defined via localisations\footnote{By a localisation of a class $x \in A_*\(X\)$ to a closed subscheme $i: Y \into X$, we mean a class $y \in A_*\(Y\)$ such that $i_* y = x$.} so long as it is acted on by a torus with a compact fixed locus. We review some localisations studied by Jiang-Thomas \cite{JT} in Appendix \ref{app:A}.
For a compact quasi-smooth derived scheme $M$ with its associated $(-1)$-shifted cotangent bundle $N$\footnote{For instance when $M$ is a moduli space of stable sheaves on a surface $S$, $N$ is the moduli space of stable sheaves on the canonical bundle $K_S$.}, $\mathsf{T}:=\C^*$ acts on $N$ fiberwise so that $M$ becomes its fixed locus. As a (classical) scheme, $N$ is roughly the dual of the obstruction sheaf over $M$. In this case, Jiang-Thomas show there are really only two different localisations \cite{JT} -- torus and cosection localisations.
Letting $\EE_M\to \mathbb{L}_M$ be the perfect obstruction theory of $M$ so that $N=\Spec\Sym(h^{1}(\EE^\vee_M))$, $t$ be the Euler class of the standard weight $1$ representation $\t$ of $\mathsf{T}$, and $\sigma: h^1(\EE^\vee_N)\cong\Omega_N\to{\mathcal{O}}_N$ be the cosection induced by the Euler vector field, these two cycles are\footnote{The equivariant cycle $[M]^{\vir}/e_{\mathsf{T}}\(\EE_{M}[-1]\ot\t\)$ is then a polynomial in $t$ by degree reason. Since it is of degree zero, the coefficient of $t^i$ should be a degree $i$ Chow class.} \beq{virs} [N]^{\vir}_{\mathsf{T}}\ :=\ \left\{\frac{[M]^{\vir}}{e_{\mathsf{T}}\(\EE_{M}[-1]\ot\t\)}\right\}_{t=0} \ \ \text{ and }\ \ [N]^{\vir}_{\sigma} \ \in\ A_0\(M\),
\eeq where the latter is the cosection localised cycle \cite{KL}. Our main result is the difference of the two is given by the reduced cycle of the projectivisation $\PP(N)$. The composition $\EE_N|_{N\take M}\rt{\phi}\mathbb{L}_{N\take M}\to\Omega_{(N\take M)/\PP(N)}$ defines a virtual rank $-1$ perfect obstruction theory of $\PP(N)$ $$
\EE_{\PP(N)}\ :=\ \Cone\(\,\EE_N|_{N\take M}\ \To\ \Omega_{(N\take M)/\PP(N)}\)[-1] $$ in $D_{\C^*}\(N\take M\)\cong D\(\PP(N)\)$. The residue map $\Omega_{\PP(N\oplus\; {\mathcal{O}}_M)}\(\log\PP(N)\)\(\PP(N)\)\to{\mathcal{O}}_{\PP(N)}\(\PP(N)\)$ factors through $$ \Omega_{\PP(N\oplus\; {\mathcal{O}}_M)}\(\log\PP(N)\)\(\PP(N)\)\ \To\ {\mathcal{O}}_{\PP(N\oplus\;{\mathcal{O}}_M)}\(\PP(N)\) $$ extending $\sigma:\Omega_N\to {\mathcal{O}}_N$, whose restriction
to $\PP(N)$ defines a surjective cosection\footnote{A way of thinking $\Omega_{\PP(N\oplus\; {\mathcal{O}}_M)}\(\log\PP(N)\)\(\PP(N)\)|_{\PP(N)}$ is isomorphic to the obstruction bundle $h^1\(\EE^\vee_{\PP(N)}\)\cong h^1\(\EE^\vee_N|_{N\take M}\)\cong h^0\(\EE_N|_{N\take M}\)$ is to see them as a nontrivial element in $\Ext^1\(N_{\PP(N)/\PP(N\oplus{\mathcal{O}}_M)},\Omega_{\PP(N)}\)\cong\Ext^1\(\Omega_{N\take M/\PP(N)},h^0\(\EE_{\PP(N)}\)\)$ which is $\C$.} $\EE_{\PP(N)}^\vee\to N_{\PP(N)/\PP(N\oplus{\mathcal{O}}_M)}[-1]$\footnote{The cocone of the cosection pretends to be a virtual rank zero dual perfect obstruction theory, but it is not really a dual perfect obstruction theory. Nevertheless, these are enough data to produce a degree zero virtual cycle.}. Following Kiem-Li \cite{KL}, these two define a degree zero reduced virtual cycle $[\PP(N)]^{\red}$, see Definition \ref{def:redcycle} in Section \ref{sect:Red}. \begin{Thm*}\label{main1}
The difference of the virtual cycles \eqref{virs} is $$ [N]^{\vir}_{\mathsf{T}} \ -\ [N]^{\vir}_{\sigma} \ = \ p_*[\PP\(N\)]^{\red} \ \in \ A_0\(M\), $$ where $p: \PP\(N\) \to M$ is the projection morphism. \end{Thm*}
We apply these ideas to the quintic and $t$-twisted quintic quasimap/GW invariants of Guo-Janda-Ruan \cite{GJR}\footnote{Felix Janda informed me that these are commonly called the ${\mathcal{O}}_{\PP^4}(5)$-twisted invariants. They are different from the formal invariants of Lho-Pandharipande \cite{LP}.}. Although Theorem \ref{main1} does not apply directly in this case, very similar techniques compute the difference between these invariants.
We denote by $\iota:Q^{\;\varepsilon}_g(X,d)\into Q^{\;\varepsilon}_g(\PP^4,d)$ the moduli spaces of degree $d$, $\varepsilon$-stable quasimaps of genus $g$ with no marked points to a smooth quintic $3$-fold $X$ and to $\PP^4$, $X\subset \PP^4$ \cite{CKM}. Then the quasimap invariant for $X$ is defined to be the degree of the virtual cycle $$ [Q^{\;\varepsilon}_g(X,d)]^{\vir}\ \in\ A_0(Q^{\;\varepsilon}_g(X,d)). $$ When $\varepsilon>2$, the spaces are moduli of stable maps, so the degree defines the Gromov-Witten invariant.
Consider the universal curve and map \beq{universal} \xymatrix@R=6mm{ C\ar[r]^-f\ar[d]_-\pi & \PP^4\\ Q^{\;\varepsilon}_g(\PP^4,d). } \eeq In $g=0$, $Q^{\;\varepsilon}_0(\PP^4,d)$ is smooth, $R\pi_*f^*{\mathcal{O}}(5)=\pi_*f^*{\mathcal{O}}(5)$ is a vector bundle, and the section $\pi_*f^*s$ of $\pi_*f^*{\mathcal{O}}(5)$ cuts out $Q^{\;\varepsilon}_0(X,d)\subset Q^{\;\varepsilon}_0(\PP^4,d)$, where $s\in \Gamma({\mathcal{O}}(5))$ is the defining section of $X=Z(s)\subset \PP^4$. It allows us to have the equivalence \cite{KKP} $$ \iota_*[Q^{\;\varepsilon}_0(X,d)]^{\vir}\= e(\pi_*f^*{\mathcal{O}}(5)) \cap [Q^{\;\varepsilon}_0(\PP^4,d)]. $$ This motivates considering the cycles in $g>0$, \beq{QLPclass}
\{e_{\mathsf{T}}\(R\pi_*f^*{\mathcal{O}}(5)\otimes\t^{-1}\)\cap [Q^{\;\varepsilon}_g(\PP^4,d)]^{\vir}\}_{t=0}\ \in\ A_0(Q^{\;\varepsilon}_g(\PP^4,d)) \eeq even if $Q^{\;\varepsilon}_g(\PP^4,d)$ may not be smooth and $R^1\pi_*f^*{\mathcal{O}}(5)$ need not vanish. Its degree is called the $t$-twisted invariant for $X$, so it is $t$-twisted Gromov-Witten invariant when $\varepsilon>2$.
\begin{Thm*}\label{thm2}
The reduced cycle of the projectivisation of $$ N\ :=\ \text{the moduli space of stable quasimap to $\PP^4$ with $p$-fields} $$ gives the difference up to sign \begin{align*} \{e_{\mathsf{T}}\(R\pi_*f^*{\mathcal{O}}(5)\ot\t^{-1}\) & \cap [Q^{\;\varepsilon}_g(\PP^4,d)]^{\vir}\}_{t=0} - \iota_*[Q^{\;\varepsilon}_g(X,d)]^{\vir} \\ &=(-1)^{5d+1-g}p_*[\PP(N)]^{\red}\ \in\ A_0\(Q^{\;\varepsilon}_g(\PP^4,d)\), \end{align*} where $p:\PP(N)\to Q^{\;\varepsilon}_g(\PP^4,d)$ is the projection morphism. \end{Thm*}
\begin{Rmk} We prove Theorems \ref{main1} and \ref{thm2} using the projective completion $\bN:=\PP(N\oplus {\mathcal{O}}_M)$. In \cite[Theorem 1.11=Theorem 3.21]{CJR}, Chen-Janda-Ruan proved a similar comparison result to Theorem \ref{thm2} using a different compactification of $N$. Recall the $p$-field space $N$ is the moduli space of stable objects of $(C,L,u,p)$, where \begin{align*} &\text{$C$ is a genus $g$ curve$\;$,}\ \text{ $L$ is a degree $d$ line bundle on $C$}, \\ &u\in \Gamma(C,L^{\oplus 5}),\ \ p\in\Gamma(C, L^{\otimes -5}\otimes\omega_C). \end{align*} Their compactification $\bN_{\mathrm{CJR}}$ considers $$ \Gamma\(C,\PP(L^{\otimes -5}\otimes \omega_C\oplus{\mathcal{O}}_C)\)\ \text{ instead of }\ \Gamma\(C, L^{\otimes -5}\otimes\omega_C\), $$ together with some extra structures and conditions. Then they constructed the canonical and reduced perfect obstruction theories of $\bN_{\mathrm{CJR}}$, where the latter gives the actual quintic theory and the former gives a different theory containing the $t$-twisted theory as a part of the torus localisation contributions. Their canonical perfect obstruction theory is equipped with the meromorphic cosection, inducing the regular cosection of the reduced perfect obstruction theory. It is surjective along the boundary, localising the reduced virtual cycle to the actual quintic cycle. However for us, we don't construct a reduced perfect obstruction theory on $\bN$, instead we use the meromorphic cosection of the canonical perfect obstruction theory to produce reduced virtual cycles in the sense of Kiem-Li \cite{KL}. \end{Rmk}
\subsection*{Notation}
For a morphism of schemes $f: X \to Y$ and a perfect complex ${\mathcal{E}}$ on $Y$, we often denote by ${\mathcal{E}}|_X$ the pullback $f^*{\mathcal{E}}$. We use ${\mathcal{E}}^*$ for the usual dual of ${\mathcal{E}}$, whereas ${\mathcal{E}}^\vee$ for the derived dual.
A vector bundle $E$ is sometimes thought of as its total space. For a morphism of vector bundles $f:E\to F$, $\ker f$ is sometimes thought of as the space $\Spec\(\Sym\(\coker f^*\)\)$.
\tableofcontents
\section{Quasi-smooth derived schemes and stable quasimaps} In this section, we review quasi-smooth derived schemes and moduli spaces of stable quasimaps. Though these two are not very closely related subjects, they share some common features which make the proofs of Theorems \ref{main1} and \ref{thm2} equivalent. So after reviewing the two subjects, we summarise these features in Section \ref{sect:SetUp} to establish the set-up for one proof.
\subsection*{Quasi-smooth derived scheme} Let $M$ be a quasi-projective scheme with the perfect obstruction theory \beq{EM} \EE_M=\{\,B^* \rt{d} A^*\,\}\ \To\ \mathbb{L}_M, \eeq and $N := \Spec_{{\mathcal{O}}_M} \Big( \Sym \(\text{coker} d^*\) \Big)$ be the abelian cone of the dual obstruction sheaf of $M$. By \cite[Lemma 2.1]{JT}, \beq{EMvee}
\EE_M^\vee|_N[1]=\{\,A|_N\rt{d^*}B|_N\,\}\ \To \ \mathbb{L}_{N/M} \eeq is a perfect obstruction theory of $N$ relative to $M$. The relative perfect obstruction theory \eqref{EMvee} is indeed obtained by a cut-out model \beq{localN} \xymatrix@=18pt{
& A^*|_{B^*} \ar[d] \\
N\ =\ (d\circ\tau_{B^*})^{-1}(0)\ \subset\hspace{-8mm} & B^*,\ar@/^{-2ex}/[u]_{d\circ \tau_{B^*}}}
\eeq where $\tau_{B^*}$ is the tautological section $\tau_{B^*} : {\mathcal{O}}_{B^*} \to B^*|_{B^*}$.
Suppose moreover that $M$ is a cut-out of a smooth scheme ${\mathcal{A}}$ by a section $s$ of $B$ so that \eqref{EM} is given by $d=(ds)^*|_M$. Here we regard $B$ as an extended bundle on ${\mathcal{A}}$. Then $N$ is a critical locus $N=Z(df)$ of the pairing $$
f\ :=\ \langle s|_{B^*}\, ,\, \tau_{B^*}\rangle\ \in\ \Gamma\({\mathcal{O}}_{B^*}\). $$ Hence we obtain a symmetric obstruction theory $\EE_N$ of $N$, \beq{EEN}
\EE_N\ :=\ \{\, T_{B^*} \ \rt{d^2f}\ \Omega_{B^*}\,\}|_{N}.
\eeq Together with \eqref{EM}, \eqref{EMvee} it fits in the exact triangle $\EE_M|_N \to \EE_N \to \EE^\vee_M|_N[1]$.
Though $M$ may not be obtained by a global cut-out model, $N$ is equipped with the canonical symmetric obstruction theory $\EE_N$ fit in the exact triangle if $M$ is (the classical truncation of) a quasi-smooth derived scheme -- because $N$ is (the classical truncation of) its $(-1)$-shifted cotangent bundle $T^\vee_M[-1]$. In this case local symmetric obstruction theories in \eqref{EEN} glue to $\EE_N$. In other words, the canonical symmetric lifting $\EE^\vee_M|_N \to \EE_M|_N$ of the morphism of cotangent complexes $\mathbb{L}_{N/M}[-1] \to \mathbb{L}_M|_N$ exists in the derived category, \beq{lift} \xymatrix@R=6mm{
\EE^\vee_M|_N \ar[r] \ar[d] & \EE_M|_N \ar[d]\\
\mathbb{L}_{N/M}[-1] \ar[r] & \mathbb{L}_M|_N. } \eeq $\EE_N$ is then its cone.
When $\mathsf{T}$ acts on $N$ fiberwise, $\EE^\vee_M|_N\otimes\t^{-1}[1] \to \mathbb{L}_{N/M}$ is the $\mathsf{T}$ relative perfect obstruction theory \eqref{EMvee}. The lifting \eqref{lift} becomes $\mathsf{T}$ invariant one $\EE^\vee_M|_N\otimes\t^{-1} \to \EE_M|_N$. Then the cone defines the $\mathsf{T}$ canonical symmetric obstruction theory, which is locally $$
\EE_N\ =\ \{\, T_{B^*}\otimes\t^{-1} \ \To\ \Omega_{B^*}\,\}|_{N}. $$
On the fixed locus $M$, $\EE_N|_M$ decomposes into $\EE_M\oplus \EE^\vee_M[1]\ot\t^{-1}$. Hence $[N]^{\vir}_{\mathsf{T}}$ in \eqref{virs} is the $\mathsf{T}$-localisation of $[N]^{\vir}$.
Importantly with the $\mathsf{T}$ action, the tautological section $\tau_{B^*}$ becomes Euler vector field $\tau_{B^*}:{\mathcal{O}}_{B^*}\to B^*|_{B^*}\otimes\t.$ Then by the (equivariant version of) cut-out model \eqref{localN} of $N$, we see its dual defines a cosection $\EE^\vee_M|_N\to{\mathcal{O}}_N[-1]\ot\t.$ Hence the composition with $\EE^\vee_N\to \EE^\vee_M|_N$ defines an equivariant cosection $$ \sigma\ :\ \EE^\vee_N\ \To\ {\mathcal{O}}_N[-1]\ot\t. $$
Note that $h^1(\sigma)$ is surjective where $\tau_{B^*}|_N$ is nonzero, i.e. on the complement $N\take M$ of $M$. The localisation of $[N]^{\vir}$ by $\sigma$ is then $[N]^{\vir}_{\sigma}$ in \eqref{virs}.
\subsection*{Stable quasimaps} Let $M:=Q^{\;\varepsilon}_g(\PP^4,d)$ be the moduli space of degree $d$, $\varepsilon$-stable quasimaps with genus $g$, no marked points to $\PP^4$ \cite{CKM}. The complex $\(R\pi_*f^*({\mathcal{O}}(1)^{\oplus 5})\)^\vee$ defined by using $\pi$ and $f$ in the universal family \eqref{universal} gives a relative perfect obstruction theory $\EE_{M/S}$ of $M$ over the moduli space of prestable curves of genus $g$, with line bundles of degree $d$, which we denote by $S$. Although we may not know if $M$ is projective, by \cite[Step 1 in Section 3.2.1]{KO}, $\EE_{M/S}$ has enough $2$-term, locally free representatives, i.e. for any $2$-term representative of coherent sheaves of $\EE_{M/S}$, there exists a $2$-term, locally free representative and a chain map from it to given representative. We pick one such $$ \EE_{M/S}\=\{\;B'^*\ \To\ A'^*\;\}. $$
The $p$-field space $N$ is defined to be $N:=\Spec \Sym h^{1}\(R\pi_*f^*{\mathcal{O}}(5)\)$ \cite{CL12, FJR, CFGKS}. Again by \cite[Lemma 2.1]{JT}, $R\pi_*f^*{\mathcal{O}}(5)|_N[1]$ becomes a relative perfect obstruction theory $\EE_{N/M}$. By \cite[Step 1 in Section 3.2.1]{KO}, $R\pi_*f^*{\mathcal{O}}(5)[1]$ has enough $2$-term, locally free representatives. We pick one $A\to B$, giving $$
\EE_{N/M}\=\{\;A|_N\ \rt{d^*}\ B|_N\;\}. $$ With this notation, $N$ has the same (relative) cut-out model \eqref{localN}.
Note that by its construction in \cite{CL12, FJR, CFGKS}, $(N,\EE_{N/M})$ is actually a pullback space by $M\to S$ so that $\EE_{M/S}|_N\oplus\EE_{N/M}$ defines a relative perfect obstruction theory $\EE_{N/S}$. Then the cone defines a virtual rank zero perfect obstruction theory of $N$, $$
\EE_N\ :=\ \Cone\(\;\EE_{M/S}|_N\oplus\EE_{N/M}\ \To\ \mathbb{L}_S|_N\). $$
Here the restriction of the cotangent complex $\mathbb{L}_S|_N$ is a bundle on $N$ since $S$ is a smooth Artin stack \cite{CKM} and $N$ is a Deligne-Mumford stack.
Considering the fiberwise $\mathsf{T}$ action, the complex $\EE_{N/M}$ becomes an equivariant complex $R\pi_*f^*{\mathcal{O}}(5)|_N[1]\ot\t^{-1}$ with nonzero weights, and hence $\mathsf{T}$-localised cycle is \begin{align}\label{TloCalN} [N]_{\mathsf{T}}^{\vir}&\=\left\{\frac{[M]^{\vir}}{e_{\mathsf{T}}\((R\pi_*f^*{\mathcal{O}}(5))^\vee[-1]\ot\t\)}\right\}_{t=0}\nonumber\\ &\=(-1)^{5d+1-g}\{e_{\mathsf{T}}\(R\pi_*f^*{\mathcal{O}}(5)\ot\t^{-1}\)\cap [M]^{\vir}\}_{t=0}\ \in\ A_0\(M\). \end{align}
Chang-Li \cite{CL12} first introduced a cosection of $N$ when $\varepsilon>2$ whose degeneracy locus is $Q^{\;\varepsilon}_g(X,d)$, showing that the cosection localised invariant of $N$ is equivalent to the GW invariant of the quintic $3$-fold $X$ up to sign. Several subsequent works \cite{KO, CL20, CJW, Pi} showed that the cosection localised cycle of $[N]^{\vir}$ is equal to $(-1)^{5d+1-g}[Q^{\;\varepsilon}_g(X,d)]^{\vir}$ for any $\varepsilon$. This cosection is a sum of two pieces defined on each direct summand of $\EE^\vee_{N/S}$. Here, we are interested in the piece on $\EE^\vee_{M/S}|_N$. The construction is as follows.
In \cite[Step 2 in Section 3.2.1]{KO}, a chain map representative between $\EE^\vee_{M/S}[1]=R\pi_*f^*({\mathcal{O}}(1)^{\oplus 5})$ and $\EE_{N/M}|_M=R\pi_*f^*{\mathcal{O}}(5)$ induced by the defining equation of $X$ is constructed, \beq{ChainABAB} \xymatrix@R=6mm@C=6mm{ \EE^\vee_{M/S}[1]=R\pi_*f^*({\mathcal{O}}(1)^{\oplus 5}) \ar[d] &\hspace{-15mm} = \hspace{-20mm}& \{ \hspace{-12mm}& A'\ar[r]\ar[d] &B'\ar[d] & \hspace{-13mm}\} \\ R\pi_*f^*{\mathcal{O}}(5) &\hspace{-15mm} = \hspace{-20mm}& \{ \hspace{-12mm}& A\ar[r] & B& \hspace{-12mm}\} }
\eeq by choosing suitable representatives $A,B,A',B'$. Composing with $B'|_{B^*}\to B|_{B^*}$ defined above in \eqref{ChainABAB}, the dual tautological section $\tau_{B^*}:{\mathcal{O}}_{B^*}\to B^*|_{B^*}\ot\t$ defines a homomorphism $B'|_{B^*}\to {\mathcal{O}}_{B^*}\ot\t$. From the cut-out model \eqref{localN} of $N$, it defines a cosection $\EE^\vee_{M/S}|_N\to {\mathcal{O}}_{N}[-1]\ot\t$ on $N$. Since the composition $\mathbb{L}^\vee_S|_N[-1]\to \EE^\vee_{M/S}|_N\to {\mathcal{O}}_{N}[-1]\ot\t$ is zero by \cite[Equation (3.16)]{KO}, we obtain a cosection $$ \sigma\ :\ \EE^\vee_N\ \To\ {\mathcal{O}}_N[-1]\ot\t. $$ Then its degeneracy locus is $M$ \cite[Equation (3.8)]{KO}, and hence by \cite{KO, CL20, CJW, Pi} the $\sigma$-localised cycle is \beq{CoSecN} [N]_{\sigma}^{\vir}\=(-1)^{5d+1-g}\iota_*[Q^{\;\varepsilon}_g(X,d)]^{\vir}\ \in\ A_0\(M\). \eeq By \eqref{TloCalN} and \eqref{CoSecN}, the statements of Theorems \ref{main1} and \ref{thm2} are now equivalent.
\subsection{Set-up}\label{sect:SetUp} Let $M$ be a finite type, separated Deligne-Mumford stack over a smooth Artin stack $S$ with the relative perfect obstruction theory $\EE_{M/S}$. Suppose it has enough $2$-term, locally free representatives so that we can pick one such $$ \EE_{M/S}\ :=\ \{B'^*\ \To\ A'^*\}. $$
For a morphism of vector bundles $d:B^*\to A^*$ on $M$, we define $N:=\ker d$. Then by \cite[Lemma 2.1]{JT}, the relative perfect obstruction theory is \beq{setupEEN}
\EE_{N/M}\ =\ \{A|_N\ \rt{d^*}\ B|_N\}. \eeq So assume that $\EE_{N/M}$ has enough $2$-term, locally free representatives.
An important assumption is that there exists a lift $\EE_{N/M}[-1]\to \EE_{M/S}|_N$ of $\mathbb{L}_{N/M}[-1]\to \mathbb{L}_{M/S}|_N$ so that its cone defines a perfect obstruction theory $\EE_{N/S}$. Moreover we assume that $\dim S+\rk\EE_{N/S}=0$ so that the perfect obstruction theory of $N$ $$
\EE_N\ :=\ \Cone\(\EE_{N/S}[-1]\ \To\ \mathbb{L}_S|_N\) $$ defines a degree zero virtual cycle $[N]^{\vir}\in A_0(N)$.
The fiberwise $\mathsf{T}$ action on $N$ localises the virtual cycle $[N]^{\vir}$ via torus localisation \cite{GP}, $$
[N]^{\vir}_{\mathsf{T}}\ :=\ \left\{\frac{[M]^{\vir}}{e_{\mathsf{T}}\(\EE^\vee_{N/M}|_M\)}\right\}_{t=0}\ \in\ A_0\(M\). $$
Assume that there exists a chain map between $\EE^\vee_{M/S}[1]$ and $\EE_{N/M}|_M$, \beq{ChainAB} \xymatrix@R=6mm@C=6mm{ \EE^\vee_{M/S}[1] \ar[d] &\hspace{-15mm} = \hspace{-20mm}& \{ \hspace{-12mm}& A'\ar[r]\ar[d] &B'\ar[d] & \hspace{-13mm}\} \\
(\EE_{N/M}|_M)|_{\t=1} &\hspace{-15mm} = \hspace{-20mm}& \{ \hspace{-12mm}& A\ar[r] & B& \hspace{-12mm}\} }
\eeq for suitable choices of $A,B,A',B'$. With $B'\to B$ above in \eqref{ChainAB}, the tautological section $\tau_{B^*}:{\mathcal{O}}_{B^*}\to B^*|_{B^*}\ot\t$ defines a cosection $\EE^\vee_{M/S}|_N\to {\mathcal{O}}_N[-1]\ot\t$ on $N$, and hence it defines a cosection $\EE^\vee_{N/S}\to {\mathcal{O}}_N[-1]\ot\t$. We assume further that the composition $\mathbb{L}^\vee_S|_N[-1] \to \EE^\vee_{N/S}\to {\mathcal{O}}_N[-1]\ot\t$ is zero so that the cosection $$ \sigma\ :\ \EE^\vee_N\ \To\ {\mathcal{O}}_N[-1]\ot\t $$ is defined. Assuming the degeneracy locus is contained in $M$, we get $\sigma$-localised cycle $[N]^{\vir}_{\sigma}\in A_0(M)$ of $[N]^{\vir}$.
Note that for a quasi-smooth derived scheme $M$, we take $S=\Spec \C$ and $\eqref{ChainAB}={\operatorname{id}}$ to fit into this set-up. In the following sections, we assume $N\neq M$, since Theorems \ref{main1} and \ref{thm2} are obvious otherwise.
\section{Virtual cycle on the projective completion} We start with the set-up in Section \ref{sect:SetUp}. On the projective completion of $N$ with the $\mathsf{T}$ action $$ \overline{N}\ :=\ \PP\(N \otimes\t \oplus {\mathcal{O}}_M\), $$ we would like to define a $\mathsf{T}$ perfect obstruction theory, extending $\EE_N$. Then by virtual localisation \cite{GP} we localise the virtual cycle to the fixed locus $$ \bN^\mathsf{T} \ \cong\ M\ \cup\ D $$ where $M \into \bN$ is the zero section and $D := \PP\(N\)\into \bN$ is the infinity divisor. We prove its contribution lying on $M$ is $[N]^{\vir}_{\mathsf{T}}$ and that on $D$ is zero so that the pushdown of the virtual cycle to $M$ is $[N]^{\vir}_{\mathsf{T}}$.
\subsection{Extended perfect obstruction theory}\label{sect:LPOT} To extend the perfect obstruction theory $\EE_{N/S}$ to $\bN$, we consider the quotient expression of $\bN$ -- it is a (GIT) quotient of $N \times \C$ by $\C^*$. Then we use the perfect obstruction theory of $N\times\C$ \beq{obstimesC} \Phi \ :\ \EE_{N \times \C/S} \ :=\ \EE_{N/S} \boxplus {\mathcal{O}}_{\C} \ \To\ \mathbb{L}_{N \times \C/S} \ \cong\ \mathbb{L}_{N/S} \boxplus {\mathcal{O}}_{\C}. \eeq
Let $\mathsf{T}':=\C^*$ be a torus, different from $\mathsf{T}$. We denote by $\t'$ the standard weight $1$ representation of $\mathsf{T}'$. Then we consider a $\mathsf{T} \times \mathsf{T}'$ action on $N \times \C$ to be $\(N \otimes\t \otimes\t'\) \oplus \({\mathcal{O}}_M \otimes\t'\).$ Then $\bN$ is the quotient of $N \times \C$ by $\mathsf{T}'$ after removing the zero section $$ \bN \ \cong\ \wt{N}/\,\mathsf{T}', \ \ \wt{N}:=\(N\times\C\)\take M. $$ We denote by $q: \wt{N} \to \bN$ the quotient morphism. Sometimes thinking of $\wt{N}$ as the punctured tautological bundle $\wt{N}\cong{\mathcal{O}}_{\bN}(-D)\take \bN$ will be useful.
By abuse of notation, we regard the perfect obstruction theory $\Phi$ \eqref{obstimesC} on $N \times \C$ as a $\mathsf{T} \times \mathsf{T}'$ equivariant one. Then the restriction $$
\EE_{\wt{N}/S}\ :=\ \EE_{N \times \C/S}|_{\wt{N}} $$ defines a perfect obstruction theory since $\wt{N} \into N\times\C$ is open.
As how we get $\bN$ as a quotient of $\wt{N}$, we would like to define {\em the quotient of $\EE_{\wt{N}/S}$}, providing a perfect obstruction theory of $\bN$ over $S$. To see where it sits on, we consider the exact triangle of cotangent complexes $$ q^*\mathbb{L}_{\bN/S}\ \To\ \mathbb{L}_{\wt{N}/S} \ \To\ \Omega_q $$ induced by a smooth morphism $q$. We observe from this that the place where $\EE$ is in the diagram below \eqref{twotriangle} of triangles \beq{twotriangle} \xymatrix@R=5mm@C=13mm{ \EE\ar[d]\ar[r]&\EE_{\wt{N}/S} \ar[d]_-{\Phi} \ar[r] &\Omega_q \ar@{=}[d]\\ q^*\mathbb{L}_{\bN/S}\ar[r]&\mathbb{L}_{\wt{N}/S} \ar[r] & \Omega_q. } \eeq should be for the pullback perfect obstruction theory of $\bN$ by $q$. Using the equivalence of the bounded derived categories $q^*:D\(\bN\)\to D_{\mathsf{T}'}\(\wt{N}\)$ \cite[Proposition 2.2.5]{BL}, we define the quotient.
\begin{Def} \label{def:twotriangle} We define {\em the quotient of $\EE_{\wt{N}/S}$} to be $$ \EE_{\bN/S}\ :=\ (q^*)^{-1}\EE\=(q^*)^{-1}\Cone\(\EE_{\wt{N}/S}\to\Omega_q\)[-1]. $$ \end{Def}
Again using the equivalence $q^*:D\(\bN\)\to D_{\mathsf{T}'}\(\wt{N}\)$, we obtain a morphism \beq{EElog} \EE_{\bN/S}\ \To\ \mathbb{L}_{\bN/S} \ \in\ D\(\bN\) \eeq whose pullback by $q^*$ recovers $\EE\to q^*\mathbb{L}_{\bN/S}$.
\begin{Prop} \label{prop:EEbN}
\eqref{EElog} is a perfect obstruction theory of $\bN$ over $S$, extending $\EE_{N/S}$, i.e. $\EE_{\bN/S}|_N\cong\EE_{N/S}$. \end{Prop} \begin{proof} Let us prove $\EE$ is a perfect complex of amplitude $[-1,0]$ first. Since $h^i\(\EE_{\wt{N}/S}\)=0$ for $i\neq -1,0$, we have $h^i\(\EE\)=0$ for $i\neq -1,0,1$. The induced morphism $h^0(\mathbb{L}_{\wt{N}/S})\to\Omega_q$ is onto, showing $h^1(\EE)=0$. Hence $\EE$ is a perfect complex of amplitude $[-1,0]$. Next, by comparing the long exact sequences induced by each row of \eqref{twotriangle}, we can check that $h^0(\EE)\to h^0(q^*\mathbb{L}_{\bN/S})$ is an isomorphism, and $h^{-1}(\EE)\to h^{-1}(q^*\mathbb{L}_{\bN/S})$ is onto.
Since $q$ is smooth, $\EE_{\bN/S}$ is also a perfect complex of amplitude $[-1,0]$, $h^0(\EE_{\bN/S})\to h^0(\mathbb{L}_{\bN/S})$ is an isomorphism, and $h^{-1}(\EE_{\bN/S})\to h^{-1}(\mathbb{L}_{\bN/S})$ is onto. Hence \eqref{EElog} is a perfect obstruction theory of $\bN$ over $S$.
Since the morphism $q$ on $q^{-1}(N)=N\times\C^*\subset \wt{N}$ is the projection to $N$, we have $\EE|_{q^{-1}(N)}\cong\EE_{N/S}|_{q^{-1}(N)}$ by the construction of $\EE$ in \eqref{twotriangle}. So \begin{align*}
\EE_{\bN/S}|_N\,\cong\,\((q^*)^{-1}\EE\)|_{N}\,&\cong\,(q^*)^{-1}\(\EE|_{q^{-1}(N)}\)\\
\,& \cong\,(q^*)^{-1}\(\EE_{N/S}|_{q^{-1}(N)}\)\,\cong\,\EE_{N/S}, \end{align*} proving $\EE_{\bN/S}$ is an extension of $\EE_{N/S}$. \end{proof}
\subsection{Relative perfect obstruction theory}
As how we define $\EE_{\bN/S}$, a relative perfect obstruction theory $\EE_{\bN/M}$ of $\bN$ over $M$ can be defined as the quotient of $(\EE_{N/M}\boxplus{\mathcal{O}}_{\C})|_{\wt{N}}$ in Definition \ref{def:twotriangle}. This is an extension of the relative perfect obstruction theory \eqref{setupEEN} with the $\mathsf{T}$ action \beq{TsetupEEN}
\EE_{N/M}\,=\,\{A|_N\ot\t^{-1} \rt{d^*} B|_N\ot\t^{-1}\}. \eeq In this section we provide a cut-out expression of $\bN$ defining $\EE_{\bN/M}$.
Let $\bcB$ denote the projective completion $\PP\((B^*\otimes\t) \oplus {\mathcal{O}}_M\)$, which is smooth over $M$. Then $\bN$ is a cut-out of $\bcB$ as in the following extended picture of \eqref{localN}, \beq{locbN} \xymatrix@=18pt{
& A^*|_{\bcB}({\mathcal{D}}) \otimes\t \ar[d] \\
\bN\ =\ (d\circ\overline{\tau})^{-1}(0)\ \subset\hspace{-12mm} & \bcB,\ar@/^{-2ex}/[u]_{d\circ \overline{\tau}}} \eeq where ${\mathcal{D}}:= \PP\(B^*\)\into \bcB$ is the infinity divisor. The section $\overline{\tau}$ is given by the tautological line bundle \beq{EulerbcBM}
0\, \to\, {\mathcal{O}}_{\bcB}(-{\mathcal{D}}) \ \rt{\overline{\tau}\, \oplus\, s_{{\mathcal{D}}} }\ \(B^*|_{\bcB} \otimes\t\) \oplus {\mathcal{O}}_{\bcB}\, \to\, T_{\bcB/M}(-{\mathcal{D}}) \, \to\, 0. \eeq
\begin{Lemma}\label{Lem:Er} The relative perfect obstruction theory $\EE_{\bN/M}$ is represented by \beq{relEElog}
\EE_{\bN/M} \ \cong\ \{\,A|_{\bcB}(-{\mathcal{D}}) \otimes\t^{-1} \ \rt{d(d\circ \overline{\tau})^*}\ \Omega_{\bcB/M} \,\}|_{\bN}. \eeq \end{Lemma} \begin{proof} We use the identifications of the tangent bundle of $q$ \beq{Tq} T_q\ \cong\ {\mathcal{O}}_{\wt{N}}\
\cong\ {\mathcal{O}}_{\bN}(-D)|_{\wt{N}}\otimes\t', \eeq obtained by the Euler sequence of $\PP\({\mathcal{O}}_{\bN}(-D)\oplus{\mathcal{O}}_{\bN}\)$. Here, we consider $\wt{N}$ as the punctured tautological bundle ${\mathcal{O}}_{\bN}(-D)\take \bN$.
By Definition \ref{def:twotriangle}, the pullback $q^*\EE_{\bN/M}$ is the cocone of \beq{this} \EE_{N/M}\ot\t'^{-1}\boxplus{\mathcal{O}}_{\C}\ot\t'^{-1} \ \To\ \Omega_q. \eeq Using the global resolution of $\EE_{N/M}$ \eqref{TsetupEEN} and \eqref{Tq}, we observe that the above morphism \eqref{this} is represented by a chain map \beq{chainAB} \xymatrix@R=5mm{
B|_{\wt{\mathcal{B}}}\ot(\t\ot\t')^{-1} \oplus {\mathcal{O}}_{\wt{\mathcal{B}}}\ot\t'^{-1} \ar[r] & {\mathcal{O}}_{\wt{\mathcal{B}}} \\
A|_{\wt{\mathcal{B}}}\ot(\t\ot\t')^{-1}. \ar[u]^-{d^*\oplus\,0} } \eeq on $\wt{N}\subset \wt{\mathcal{B}}:=(B^*\times\C)\take M$. Note that the composition $$
A|_{\wt{\mathcal{B}}}\ot(\t\ot\t')^{-1}\ \rt{d^*}\ B|_{\wt{\mathcal{B}}}\ot(\t\ot\t')^{-1}\ot(\t\ot\t')^{-1}\ \To\ {\mathcal{O}}_{\wt{\mathcal{B}}} $$
is zero on $\wt{N}$. The horizontal morphism of \eqref{chainAB} is a part of the dual Euler sequence $\(\eqref{EulerbcBM}\otimes\, {\mathcal{O}}_{\bcB}({\mathcal{D}})\)|_{\wt{\mathcal{B}}}$. So its kernel is $\Omega_{\bcB/M}|_{\wt{\mathcal{B}}}$. The bundle at the bottom of \eqref{chainAB} is $\(A|_{\bcB}(-{\mathcal{D}})\)|_{\wt{\mathcal{B}}}\ot\t^{-1}$ by \eqref{Tq}, which induces ${\mathcal{O}}_{\bcB}({\mathcal{D}})|_{\wt{\mathcal{B}}}=\t'$. So $(q^*)^{-1}\(\eqref{chainAB}|_{\wt{N}}\)$ gives the representative \eqref{relEElog}. \end{proof}
\subsection{Virtual cycle} Now we define a perfect obstruction theory of $\bN$ to be $$
\EE_{\bN}\ :=\ \Cone\(\EE_{\bN/S}[-1]\ \To\ \mathbb{L}_S|_{\bN}\), $$ which produces a degree zero virtual cycle $$ [\bN]^{\vir} \ \in\ A_0\(\bN\). $$ By Proposition \ref{prop:EEbN}, $\EE_{\bN}$ is an extension of $\EE_N$.
\begin{Prop} \label{Prop:log} The virtual cycle $[\bN]^{\vir}$ pushes down to $[N]^{\vir}_\mathsf{T}$ defined in \eqref{virs} by the projection morphism $\pi: \bN \to M$, $$ [N]^{\vir}_\mathsf{T} \ =\ \pi_*[\bN]^{\vir} \ \in\ A_0\(M\). $$ \end{Prop} \begin{proof} We prove it via $\mathsf{T}$ virtual localisation \cite{GP}. To apply it to $[\bN]^{\vir}$, we need to investigate the fixed and moving parts of the perfect obstruction theory $\EE_{\bN}$ on the fixed locus $\bN^\mathsf{T}=M\cup D$. To see this, we use a triangle \beq{tritritri}
\EE_{M/S}|_{\bN}\ \To\ \EE_{\bN/S}\ \To\ \EE_{\bN/M}, \eeq induced by a diagram of triangles we have constructed so far $$ \xymatrix@R=5mm@C=7mm{
q^*\EE_{M/S}|_{\bN}\ar@{=}[r]\ar@{-->}[d]& \EE_{M/S}|_{\wt{N}}\ar[d]\\ q^*\EE_{\bN/S}\ar[r]\ar@{-->}[d]&\EE_{\wt{N}/S} \ar[d] \ar[r] &\Omega_q \ar@{=}[d]\\
q^*\EE_{\bN/M}\ar[r]&\EE_{N/M}\boxplus{\mathcal{O}}_{\C}|_{\wt{N}} \ar[r] & \Omega_q. } $$ Note that the middle and bottom horizontals come from the definitions of $\EE_{\bN/S}$ and $\EE_{\bN/M}$ in Definition \ref{def:twotriangle}. The mid-vertical is obtained by the triangle $$
\EE_{M/S}|_N\ \To\ \EE_{N/S}\ \To\ \EE_{N/M}. $$
Then we get the left vertical, giving \eqref{tritritri}. It tells us that the moving part of $\EE_{\bN}|_{\bN^{\mathsf{T}}}$ is isomorphic to that of $\EE_{\bN/M}|_{\bN^{\mathsf{T}}}$.
On $M$, we have $\EE_{\bN/M}|_M\cong \EE_{N/M}|_M$ which is the moving part. So the contribution of $\mathsf{T}$ localisation of $[\bN]^{\vir}$ on $M$ is $[N]^{\vir}_{\mathsf{T}}$.
It remains to show that the contribution on $D$ is zero. We use the representative of $\EE_{\bN/M}|_D$ obtained by Lemma \ref{Lem:Er} $$
\EE_{\bN/M}|_D \ \cong\ \{\,A|_{D}(-{\mathcal{D}}) \otimes\t^{-1} \ \rt{d(d\circ \overline{\tau})^*}\ \Omega_{\bcB/M}|_D \,\}|_{\bN}. $$
The bundle $A|_{D}(-{\mathcal{D}}) \otimes\t^{-1}$ is fixed since the tautological line bundle ${\mathcal{O}}_{D}(-{\mathcal{D}})$ is contained in $B^*|_{D}\ot\t$. Whereas $\Omega_{\bcB/M}|_D$ contains a conormal bundle $N^*_{D/\bN}$ which is not fixed. Hence the moving part of $\EE_{\bN/M}|_D$ has virtual rank $1$, which implies the fixed part of $\EE_{\bN}|_D$ has virtual rank $-1$. So the contribution on $D$ is zero. \end{proof}
\section{Reduced cycle on the projectivisation} In this section we find an extension $$ \bsigma\ :\ \EE_{\bN}^\vee \ \To\ {\mathcal{O}}_{\bN}(D)[-1] \otimes\t, $$ of the cosection $\sigma: \EE_N^\vee\to {\mathcal{O}}_N[-1]\ot\t$. Assuming $h^1(\bsigma)$ is surjective on $D\subset \bN$ it defines the reduced cycle $[\PP(N)]^{\red}$. Then we prove $\bsigma$-localised cycle of $[\bN]^{\vir}$ is a sum $[N]^{\vir}_{\sigma}+[\PP(N)]^{\red}\in A_0(M\cup D)$.
\subsection{Extended twisted cosection} The Euler sequence on $\bcB$ induces a homomorphism $$
B|_{\bcB}\ \rt{\btau}\ {\mathcal{O}}_{\bcB}({\mathcal{D}})\ot\t, $$
which is surjective on ${\mathcal{D}}$, extending the dual tautological section $\tau_{B^*}$. Using the cut-out model of $\bN\subset \bcB$ \eqref{locbN} we see the composition with $B'|_{\bcB}\to B|_{\bcB}$ in \eqref{ChainAB} defines a cosection on $\bN$ $$
\EE^\vee_{M/S}|_{\bN}\ \To\ {\mathcal{O}}_{\bN}(D)[-1]\otimes\t. $$
The composition $\mathbb{L}_S^\vee[-1]|_{\bN}\to \EE^\vee_{\bN/S}\to \EE^\vee_{M/S}|_{\bN}\to {\mathcal{O}}_{\bN}(D)[-1]\otimes\t$ is zero because it is zero on $N$ by the assumption in Section \ref{sect:SetUp}. Hence it defines a cosection $$ \bsigma\ :\ \EE^\vee_{\bN}\ \To\ {\mathcal{O}}_{\bN}(D)[-1]\otimes\t. $$ Obviously it is an extension of $\sigma: \EE^\vee_N\to {\mathcal{O}}_N[-1]\ot\t$.
\subsection{Reduced cycle}\label{sect:Red}
$\PP(N)=D$ is equipped with the perfect obstruction theory $\EE_{D}:=\EE_{\bN}|_D^{\mathrm{fix}}$ (of virtual rank $-1$) and the morphism $\bsigma_D:=\bsigma|_{\EE^\vee_D}:\EE^\vee_D\to {\mathcal{O}}_D(D)[-1]\ot\t$. Note that $\EE_D$ is described as a quotient $$
\EE_{D}\ \cong\ \Cone\(\,\EE_N|_{N\take M}\ \To\ \Omega_{(N\take M)/D}\)[-1] $$ through the equivalence $D_{\C^*}\(N\take M\)\cong D\(D\)$. Now we assume the surjectivity of $h^1(\bsigma)$ on $D$, inducing the surjectivity of $h^1(\bsigma_D)$. In this case, Kiem-Li proved the cone reduction property \cite[Corollary 4.5]{KL} \beq{INCLUSION} {\mathfrak{C}}_D\ \subset\ h^1/h^0\(\Cone(\bsigma_D)\), \eeq where ${\mathfrak{C}}_D$ is the intrinsic normal cone of $D$ and $h^1/h^0$ denotes the cone stack associated to a complex defined in \cite{BF}. The surjectivity of $h^1(\bsigma_D)$ allows $h^1/h^0(\Cone(\bsigma_D))$ to be actually a bundle stack so that the Gysin map $$ 0^!_{h^1/h^0(\Cone(\bsigma_D))}\ :\ A_{0}\Big(h^1/h^0\(\Cone(\bsigma_D)\)\Big)\ \To\ A_{0}\(D\) $$ is defined \cite{Kr}. \begin{Def}\label{def:redcycle} {\em The reduced cycle} is defined to be $$ [\PP(N)]^{\red}\ :=\ 0^!_{h^1/h^0(\Cone(\bsigma_D))}[{\mathfrak{C}}_D] \ \in\ A_{0}\(D\). $$ \end{Def}
Note that the inclusion \eqref{INCLUSION} may not be a scheme-theoretic embedding, but only a set-theoretic one. Hence $\Cone(\bsigma_D)^\vee$ may not be a perfect obstruction theory in general.
\begin{Thm} \label{main2} We obtain the following comparison result $$ [N]^{\vir}_{\mathsf{T}}\ -\ [N]^{\vir}_{\sigma}\=p_*[\PP(N)]^{\red}\ \in\ A_0(M), $$ where $p:\PP(N)\to M$ is the projection morphism. \end{Thm} \begin{proof} Proposition \ref{Prop:log} implies $$ [N]^{\vir}_{\mathsf{T}}\=\pi_*[\bN]^{\vir}\ \in\ A_0(M). $$ So it is enough to show that $[N]^{\vir}_{\sigma}+[\PP(N)]^{\red}$ is a localisation of $[\bN]^{\vir}$.
Take any global representative $$ \EE_{\bN}\=\{\;E^*\ \To\ T^*\}. $$ Then the Behrend-Fantechi cone $C_{\bN}$ (of $\dim=\rk T=\rk E$) lies in $E$ and the virtual cycle is its intersection with the zero section \beq{FulDef} [\bN]^{\vir}\=0^!_E[C_{\bN}]. \eeq The twisted cosection $\bsigma$ induces a homomorphism $$ \bsigma\ :\ E\ \To\ {\mathcal{O}}_{\bN}(D), $$ which we denote also by $\bsigma$ by abuse of notation. It is onto outside of $M\into \bN$. Hence on the blowup $b:Bl:=Bl_M\bN\to \bN$ with the exceptional divisor $D_0$, $\bsigma$ induces a surjection $$
\bsigma_{Bl}\ :\ E|_{Bl}\ \twoheadrightarrow\ {\mathcal{O}}_{Bl}\(D-D_0\). $$
Let $F:=\ker\(\bsigma_{Bl}\)$. Then the disjoint union $E|_M \cup F$ surjects to the kernel of $\bsigma$, $$
E|_M\, \cup\, F\ \twoheadrightarrow\ \ker\(\bsigma\)\ \subset E. $$ Since this induces a surjection of Chow groups, the cone reduction property $C_{\bN}\subset\ker\(\bsigma\)$ for the cosection $\bsigma$ \cite[Corollary 4.5]{KL} tells us that there exist cycles \begin{align}\label{CN12}
&[C_1]\ \in\ A_{\rk E}\(E|_M\), \ \ [C_2]\ \in\ A_{\rk E}\(F\) \text{ such that} \nonumber\\ &[C_{\bN}]\=[C_1]+[C_2]\,\in\,A_{\rk E}\(E\) \text{ after pushforwards}. \end{align} Hence we obtain a localisation \begin{align*}
&0^!_{E|_M}[C_1] + b_*\((D-D_0)\cap0^!_F[C_2]\)\ \in\ A_{0}\(M\cup D\)\\ \nonumber &\ \ \Mapsto 0^!_{E}[C_1] + b_*\(0^!_{{\mathcal{O}}(D-D_0)}0^!_{F}[C_2]\)\\ \nonumber
&\ \ \ \ \ \ \, = 0^!_{E}[C_1] + b_*\(0^!_{E|_{Bl}}[C_2]\) \stackrel{\eqref{CN12}}{=} 0^!_E[C_{\bN}]\stackrel{\eqref{FulDef}}{=} [\bN]^{\vir}\ \in\ A_0\(\bN\). \end{align*} Note that by definition of cosection localisation \cite[Section 2]{KL}, we have $$
0^!_{E|_M}[C_1]\; -\; b_*\(D_0\;\cap\;0^!_F[C_2]\)\, =\,[N]^{\vir}_{\sigma}. $$ So it remains to show that \beq{final1} b_*\(D\,\cap\,0^!_F[C_2]\)\=[\PP(N)]^{\red}. \eeq
Since $D$ does not meet $D_0$, we can restrict $F$ and $C_2$ to $Bl\take D_0 \cong N_{D/\bN}$, having \begin{align}\label{final2}
b_*\(D\,\cap\,0^!_F[C_2]\)&\=D\,\cap\,0^!_{F|_{N_{D/\bN}}}[C_2|_{N_{D/\bN}}]\\
&\=0^!_{F|_D}\(F|_D\,\cap\,[C_2|_{N_{D/\bN}}]\).\nonumber \end{align}
The restriction $[C_2|_{N_{D/\bN}}]$ is the cycle representing Behrend-Fantechi cone of $N_{D/\bN}$ $$
C_{N_{D/\bN}}\ \subset\ F|_{N_{D/\bN}} \ \subset\ E|_{N_{D/\bN}}, $$
obtained by the representative $\{E|^*_{N_{D/\bN}}\to T|^*_{N_{D/\bN}}\}=\EE_{\bN}|_{N_{D/\bN}}$.
We claim that the intersection with the infinity divisor $F|_D\subset F|_{N_{D/\bN}}$ is the Behrend-Fantechi cone of $D$, \beq{CLAIM}
F|_D\cap C_{N_{D/\bN}} \=C_D\ \subset\ F|_D\ \subset\ E|_D
\eeq obtained by the representative $\{E^*|_D\to\frac{T^*|_D}{N^*_{D/\bN}}\}=\EE_{\bN}|_D^{\mathrm{fix}}$, as cycles. It is enough to show that $C_D=C_{N_{D/\bN}}|_D$ as Deligne-Mumford stacks. But this is more or less obvious since $N_{D/\bN}$ is a bundle on $D$. Picking any local smooth embedding $D\subset {\mathcal{U}}$, we obtain a Cartesian diagram $$ \xymatrix@R=6mm{ D\,\ar@{^(->}[r]\ar@{^(->}[d]& {\mathcal{U}}\ar@{^(->}[d] \\ N_{D/\bN}\ar@{^(->}[r] & {\mathcal{U}}\times\C, } $$ by assuming $N_{D/\bN}$ is a trivial bundle on $D$ locally. Then we have \begin{align*}
&C_{D/{\mathcal{U}}}= C_{N_{D/\bN}/{\mathcal{U}}\times\C}|_D\\
&\Longrightarrow\ \left[\frac{C_{D/{\mathcal{U}}}}{T_{{\mathcal{U}}}|_D}\right]\times_{\left[\frac{E|_D}{T|_D/N_{D/\bN}}\right]}E|_D=\left[\frac{C_{N_{D/\bN}/{\mathcal{U}}\times\C}|_D}{T_{{\mathcal{U}}\times\C}|_D}\right]\times_{\left[\frac{E|_D}{T|_D}\right]}E|_D\\
&\Longrightarrow\ C_D=C_{N_{D/\bN}}|_D \end{align*} since the last equality is a gluing of the middle equality. So the claim \eqref{CLAIM} is true. By \eqref{final2}, \eqref{CLAIM}, we have \eqref{final1} $$
b_*\(D\,\cap\,0^!_F[C_2]\)\=0^!_{F|_D}[C_D]\=0^!_{h^1/h^0(\Cone(\bsigma_D))}[{\mathfrak{C}}_D]\=[\PP(N)]^{\red}. $$
\end{proof}
\section{Proofs of Theorems \ref{main1} and \ref{thm2}}\label{app} Theorem \ref{main2} implies Theorem \ref{main1} immediately. However, to obtain Theorem \ref{thm2} from Theorem \ref{main2} we need the surjectivity of $h^1(\bsigma)$. We introduce a criterion to check this surjectivity.
\begin{Lemma}\label{HuHuHu} $h^1(\bsigma)$ is surjective on $D$ if and only if $h^0\(\Cone\eqref{ChainAB}\)=0$. \end{Lemma} \begin{proof} By abuse of notation, we denote by $\bsigma$ the composition $$
\bsigma\ :\ B'|_{\bN} \ \To\ B|_{\bN}\ \To\ {\mathcal{O}}_{\bN}(D). $$
Then $\bsigma|_D$ is surjective iff $$
\bsigma^*|_D\ :\ {\mathcal{O}}_D(-D)\ \To\ B^*|_D\ \To\ B'^*|_D $$ is pointwise injective iff $\ker (B^*\to A^*) \subset B^*\to B'^*$ is pointwise injective iff $h^{0}\(\Cone\eqref{ChainAB}^\vee\)=0$ at each point iff $h^0\(\Cone\eqref{ChainAB}\)=0$. \end{proof}
Lemma \ref{HuHuHu} tells us that it is enough to check if $$
h^0\(\EE^\vee_{M/S}[1]\)\ \To\ h^0\(\EE_{N/M}|_M\) $$ is onto for stable quasimaps. This map is described in \cite[Equation (3.7) and Example 1]{KO}. Following this description, the surjectivity is equivalent to the smoothness of $X$. Hence Theorem \ref{thm2} follows from Theorem \ref{main2}.
\begin{Rmk} Indeed, Theorem \ref{thm2} holds for any Calabi-Yau $3$-fold, complete intersection in the GIT quotient coming from a gauged linear sigma model studied in \cite{CFGKS}. More precisely letting $X$ be a Calabi-Yau $3$-fold, complete intersection in the GIT quotient $Y$ as the zero of a section of the bundle $V$ on $Y$, we have \begin{align*} \{e_{\mathsf{T}}\(R\pi_*f^*V\ot\t^{-1}\) & \cap [Q^{\;\varepsilon}_g(Y,d)]^{\vir}\}_{t=0} - \iota_*[Q^{\;\varepsilon}_g(X,d)]^{\vir} \\ &=(-1)^{\rank R\pi_*f^*V}p_*[\PP(N)]^{\red}\ \in\ A_0\(Q^{\;\varepsilon}_g(Y,d)\), \end{align*} if the defining equation of $X\subset Y$ (providing the defining section of $V$) has singularities only on the unstable locus of the GIT quotient $Y$. \end{Rmk}
\appendix \section{Five localisations}\label{app:A} We review the five localised invariants introduced in \cite{JT} for a quasi-projective scheme $N$ with the symmetric obstruction theory $\EE_N\to\mathbb{L}_N$, acted on by a torus $\mathsf{T} := \C^*$ with a compact fixed locus $N^\mathsf{T}$. We assume the POT is equivariant, but the symmetricity need not be preserved by $\mathsf{T}$.
\subsubsection*{The virtual signed Euler characteristic of Ciocan-Fontanine--Kapranov and Fantechi--G\"ottsche} The virtual Euler characteristic of $N^\mathsf{T}$ is defined in \cite{CK, FG} by, $$
\int_{\, [N^\mathsf{T}]^{\vir}} c\,\(\,\EE_N^\vee|_{N^\mathsf{T}}^{\mathrm{fix}}\) \ \in \ \Q. $$
The terminology {\em virtual Euler characteristic} comes from regarding $\EE_N^\vee|_{N^\mathsf{T}}^{\mathrm{fix}}$ to be the virtual tangent bundle of $N^\mathsf{T}$.
In \cite{JT}, using the virtual {\em cotangent} bundle $\EE_N|_{N^\mathsf{T}}^{\mathrm{fix}}$, the {\em virtual signed Euler characteristic} $$
e_{1'} \ := \ \int_{\, [N^\mathsf{T}]^{\vir}} c\,\(\,\EE_N|_{N^\mathsf{T}}^{\mathrm{fix}}\) \ \in \ \Q $$ is considered.
\subsubsection*{Graber--Pandharipande torus localisation} The virtual cycle $[N]^{\vir}$ has a lifting in the equivariant Chow group $A^{\mathsf{T}}_{0}\(N\)$. Then one can localise $[N]^{\vir}$ by virtual torus localisation \cite{GP}, $$ [N]^{\vir} \ = \ \iota_*\left(\frac{[N^\mathsf{T}]^{\vir}}{e_{\mathsf{T}}\(\mathsf{N}^{\vir}_{N^\mathsf{T}/N}\)}\right), \ \ \ \iota\ :\ N^\mathsf{T} \ \into\ N. $$
Here, $\mathsf{N}^{\vir}$ denotes the virtual normal bundle. The fixed (weight zero) part $\EE_N|_{N^\mathsf{T}}^{\mathrm{fix}}$ of the pullback complex $\EE_N|_{N^\mathsf{T}}$ is a perfect obstruction theory of $N^\mathsf{T}$ \cite{GP}. Thus the virtual fundamental class $[N^\mathsf{T}]^{\vir}$ is defined. Using the class $$ \frac{[N^\mathsf{T}]^{\vir}}{e_{\mathsf{T}}\(\mathsf{N}^{\vir}_{N^\mathsf{T}/N}\)} \ \in\ A_0\(N^\mathsf{T}\)[t] $$ which is a polynomial in $t$ for degree reasons,
we define an invariant $$ e_1 \ :=\ \deg \left( \left\{\frac{[N^\mathsf{T}]^{\vir}}{e_{\mathsf{T}}\(\mathsf{N}^{\vir}_{N^\mathsf{T}/N}\)}\right\}_{t=0}\, \right) \ \in\ \Q. $$
\subsubsection*{Kiem--Li cosection localisation} The pairing with the Euler vector field defines a cosection $$ \sigma \ : \ \Omega_N \ \To \ {\mathcal{O}}_N $$ on the cotangent sheaf $\Omega_N$ of $N$ which is the obstruction sheaf $h^1\(\EE_N^\vee\) \cong h^0\(\EE_N\) \cong \Omega_N$. Then one can localise $[N]^{\vir}$ via cosection localisation \cite{KL} $$ [N]^{\vir} \ = \ \iota_*\, [N]^{\vir}_\sigma . $$
It defines an invariant $$ e_2 \ :=\ \deg\, [N]^{\vir}_{\sigma} \ \in\ \Z. $$
\subsubsection*{Behrend localisation} The Euler characteristic weighted by the Behrend function $\nu_N$ \cite[Definition 1.4]{Be} gives rise to an invariant $$
e_{2'} \ :=\ e\(N^{\mathsf{T}}, \nu_N|_{N^{\mathsf{T}}}\) \ \in\ \Z. $$
\subsubsection*{The signed Euler characteristic} Considering $N^{\mathsf{T}}$ simply as a topological space produces an invariant $$
e_{2''} \ :=\ (-1)^{{\mathrm{rank}\,}\(\EE_N|_{N^\mathsf{T}}^{\mathrm{fix}}\)} \cdot e\(N^{\mathsf{T}}\) \ \in\ \Z. $$
\subsection*{Known results} We list here some known results about the five localised invariants.
{\bf 1.} Kiem-Li and Behrend localised invariants are the same, $e_2=e_{2'}$, by \cite[Theorem 5.20]{Ji}.
{\bf 2.} When the symmetricity of $\EE_N$ is preserved by $\mathsf{T}$, Graber-Pandharipande and Behrend localised invariants are the same, $e_1=e_{2'}$. The following is a brief explanation. In this case $[N^{\mathsf{T}}]^{\vir}$ is of degree zero and $$ e_{\mathsf{T}}\(\mathsf{N}^{\vir}_{N^\mathsf{T}/N}\) = (-1)^{{\mathrm{rank}\,}\(\mathsf{N}^{\vir}_{N^\mathsf{T}/N}\)}, \ \ \deg\,[N^{\mathsf{T}}]^{\vir} = e\(N^{\mathsf{T}}, \nu_{N^{\mathsf{T}}}\). $$ The latter comes from \cite[Theorem 4.18]{Be}. So we have $$ e_1\ =\ (-1)^{{\mathrm{rank}\,}\(\mathsf{N}^{\vir}_{N^\mathsf{T}/N}\)}e\(N^{\mathsf{T}}, \nu_{N^{\mathsf{T}}}\). $$ Then $e_1=e_{2'}$ comes from \cite[Theorem 2.4]{LQ}.
{\bf 3.} Combining the above results, we have $e_1=e_2$ when the symmetricity is preserved by $\mathsf{T}$. There is a stronger and more direct proof: by \cite[Theorem 3.5]{CKL}, we obtain an equivalence of cycles \beq{1=2} \frac{[N^\mathsf{T}]^{\vir}}{e_{\mathsf{T}}\(\mathsf{N}^{\vir}_{N^\mathsf{T}/N}\)} \ =\ [N]^{\vir}_{\sigma}. \eeq
This induces $e_1=e_2$ immediately.
Note that \eqref{1=2} does not require $N^{\mathsf{T}}$ to be compact.
{\bf 4.} For a $(-1)$-shifted cotangent bundle $N$ of a compact quasi-smooth derived scheme, $e_1=e_{1'}$ and $e_2=e_{2'}=e_{2''}$ are proven in \cite[Theorem 1.2]{JT}. However \cite[Examples 3.1 and 3.2]{JT} show that $e_1$ may not be equal to $e_2$ in general.
\noindent {\tt{[email protected]} }
\noindent Department of Mathematics, \ Imperial College London \\ London SW7 2AZ, \ United Kingdom
\end{document}
|
arXiv
|
Anne Bennett Prize
The Anne Bennett Prize and Senior Anne Bennett Prize are awards given by the London Mathematical Society.[1][2]
In every third year, the society offers the Senior Anne Bennett prize to a mathematician normally based in the United Kingdom for work in, influence on or service to mathematics, particularly in relation to advancing the careers of women in mathematics.[1]
In the two years out of three in which the Senior Anne Bennett Prize is not awarded, the society offers the Anne Bennett Prize to a mathematician within ten years of their doctorate for work in and influence on mathematics, particularly acting as an inspiration for women mathematicians.[1]
Both prizes are awarded in memory of Anne Bennett, an administrator for the London Mathematical Society who died in 2012.[3]
The Anne Bennett Prizes should be distinguished from the Anne Bennett Memorial Award for Distinguished Service of the Royal Society of Chemistry,[4] for which Anne Bennett also worked.[3]
Winners
The winners of the Anne Bennett Prize have been:
• 2015 Apala Majumdar, in recognition of her outstanding contributions to the mathematics of liquid crystals and to the liquid crystal community.[5][6]
• 2016 Julia Wolf, in recognition of her outstanding contributions to additive number theory, combinatorics and harmonic analysis and to the mathematical community.[7][8]
• 2018 Lotte Hollands, in recognition of her outstanding research at the interface between quantum theory and geometry and of her leadership in mathematical outreach activities.[9][10]
• 2019 Eva-Maria Graefe, in recognition of her outstanding research in quantum theory and the inspirational role she has played among female students and early career researchers in mathematics and physics.[11]
• 2021 Viveka Erlandsson, "for her outstanding achievements in geometry and topology and her inspirational active role in promoting women mathematicians".[12]
• 2022 Asma Hassannezhad, in recognition of her "work in spectral geometry and her substantial contributions toward the advancement of women in mathematics".[13]
The winners of the Senior Anne Bennett Prize have been:
• 2014 Caroline Series, in recognition of her leading contributions to hyperbolic geometry and symbolic dynamics, and of the major impact of her numerous initiatives towards the advancement of women in mathematics.[14][15]
• 2017 Alison Etheridge, in recognition of her outstanding research on measure-valued stochastic processes and applications to population biology; and for her impressive leadership and service to the profession.[16][17]
• 2020 Peter Clarkson, "in recognition of his tireless work to support gender equality in UK mathematics, and particularly for his leadership in developing good practice among departments of mathematical sciences".[18]
See also
• List of mathematics awards
References
1. "LMS prizes - details and regulations | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
2. List of LMS prize winners, London Mathematical Society, retrieved 2019-07-23
3. Nixon, Fiona (2012). "LMS Obituary - Anne Bennett | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
4. "The Anne Bennett Memorial Award for Distinguished Service". www.rsc.org. Retrieved 2019-10-08.
5. "Citations for 2015 LMS prize winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
6. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 62 (9): 1081, October 2015
7. "2016 LMS Prize Winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
8. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 63 (9): 1064, October 2016
9. "2018 LMS Prize Winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
10. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 65 (9): 1122, October 2018
11. "2019 LMS Prize Winners | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
12. "Anne Bennett Prize: citation for Viveka Erlandsson" (PDF). London Mathematical Society. 2021. Retrieved 2022-02-04.
13. "LMS Prize Winners 2022 | London Mathematical Society". www.lms.ac.uk. Retrieved 21 August 2022.
14. "LMS Prizes 2014 | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
15. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 61 (9): 1090, October 2014
16. "LMS Prizes 2017 | London Mathematical Society". www.lms.ac.uk. Retrieved 2019-10-08.
17. "Prizes of the London Mathematical Society" (PDF), Mathematics People, Notices of the American Mathematical Society, 64 (9): 1036, October 2017
18. "Senior Anne Bennett Prize citation: Peter Clarkson" (PDF). London Mathematical Society. 2020. Retrieved 2020-06-27.
|
Wikipedia
|
Evaluate $\log_2 (4^2)$.
We have $\log_2 4 = 2$, so $\log_2(4^2) = \log_2((2^2)^2) = \log_2 (2^4) = \boxed{4}$.
|
Math Dataset
|
Would "layered" radio interferometry work?
tl;dr - Is splitting up the process of interferometry as shown in the diagram possible, and if so, is it more efficient and/or easier than traditional methods?
I have been doing some research into radio interferometry, and I had a question about it - I know that we can combine the signals from multiple telescopes into one "image" using interferometry, but what if I was able to do that multiple times?
Let me describe this a bit better. Say I have nine dishes. I arrange three of them in an equilateral triangle, and then I do the same with the other six, and then I arrange those three groups of three into a larger equilateral triangle. Then, I combine the signals from each group of three telescopes, and then I combine the three resulting signals. Here's a diagram that might help - the blue dishes are the "telescopes", and the red and green boxes are the "processors" or where the signals would be interfered.
In principle, would this work? And, in the context of radio astronomy, would it be easier to combine or interfere only three signals rather than nine?
radio-astronomy interferometry
Calc-You-Later
Absolutely this can work, as can be seen in the Murchison Widefield Array (MWA) in Western Australia. This is essentially exactly how it works. You definitely do lose information (the first stage of combining antennas significantly reduces the field of view, and the number of interferometric baselines you get is reduced).
Frequency Range: 70-300 MHz
Number of Receptors: 2048 dual-polarization dipoles
Number of antenna tiles: 128
Number of baselines: 8128
There are 32 receptors per tile (2 polarizations for $4\times4=16$ elements), and $N = 128$ tiles. The number of baselines is $N(N-1)/2 = 8128$.
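A quick sanity check of that arithmetic (a throwaway Python snippet; the function name is just for illustration):

```python
def n_baselines(n_tiles):
    """Number of distinct antenna pairs (baselines) among n_tiles elements: N(N-1)/2."""
    return n_tiles * (n_tiles - 1) // 2

print(n_baselines(128))  # 8128, matching the MWA figure above
```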
The advantage is a reduction in the data rates, and hence reduced digital processing. Note the first stage would be treated as beam forming - you cannot do interferometry at the first stage and then somehow do interferometry between the results. I.e. the first stage is summing the voltages (with appropriate phase corrections) and the second stage would be interferometry.
One of the tiles making up the 32T, a prototype instrument for the Murchison Widefield Array (source)
An MWA antenna consists of a four by four regular grid of dual-polarisation dipole elements arranged on a 4m x 4m steel mesh ground plane. Each antenna (with its 16 dipoles) is known as a "tile". Signals from each dipole pass through a low noise amplifier (LNA) and are combined in an analogue beamformer to produce tile beams on the sky. Beamformers sit next to the tiles in the field. The radio frequency (RF) signals from the tile-beams are transmitted to a receiver, each receiver being able to process the signals from a group of eight tiles. Receivers therefore sit in the field, close to groups of eight tiles; cables between receivers and beamformers carry data, power and control signals. Power for the receivers is provided from a central generator.
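To make the two-stage idea above concrete, here is a purely schematic numpy sketch of my own (not actual MWA processing - the signals are just independent noise, so the resulting visibility is close to zero; the point is only the order of operations: a phase-corrected voltage sum per tile first, cross-correlation between tile outputs second):

```python
import numpy as np

rng = np.random.default_rng(0)

def tile_output(n_elem=16, n_samp=4096):
    """Stage 1 (beamforming): phase-corrected sum of the element voltages of one tile."""
    v = rng.standard_normal((n_elem, n_samp)) + 1j * rng.standard_normal((n_elem, n_samp))
    w = np.exp(-1j * rng.uniform(0, 2 * np.pi, n_elem))  # per-element phase corrections
    return (w[:, None] * v).sum(axis=0)                  # one voltage stream per tile

# Stage 2 (interferometry): the correlator cross-multiplies beamformed tile outputs.
t1, t2 = tile_output(), tile_output()
visibility = np.vdot(t1, t2) / t1.size
print(abs(visibility))
```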
update: See @Chris' cool answer about MWA for something that does work!
In principle, would this work?
I can't say "no, in principle it could never work", but the combination in the layers loses information, so it would almost certainly decrease image quality somewhat.
Is it more efficient and/or easier than traditional methods?
It depends what "more efficient" means. If the performance of your array is worse by a factor X but you saved a fraction of the money Y, is this a "more efficient" way to do science?
In an imaging optical telescope (or any imaging system including eyes) every pixel is illuminated simultaneously and directly by all areas of the aperture. From a given point in the distance a telescope will (try to) preserve the phase of all paths reaching the pixel so that the resulting intensity corresponds to the incoming power. This allows the system to obtain the best resolution.
Once you perform the interference and measure the resulting intensity, you lose the phase information forever (in conventional systems).
In the same way, in a radio telescope array usually all signals from all elements are combined together in a device called a "correlator" which in modern radio astronomy is a digital computer. Each pixel in the final image is calculated from correlations of every possible pair of elements in the array.
For example, from Alma Observatory's Correlator page:
The ALMA Main Array Correlator
To make images from millimetric wavelengths joined by multiple antennas, we need an absolutely colossal amount of computer power. Signals from each antenna pair – there are 1225 possible pairs alone in the main array of antennas (50)- should be mathematically compared billions of times per second. You would need millions of laptop computers to perform the number of operations that ALMA carries out every second! This need resulted in the construction of one of the fastest supercomputers in the world, the ALMA Correlator.
The Correlator, installed in the AOS Technical Building at an altitude of 5,000 meters above sea level, is the last component in the cosmic wavelength collection process. It is a very large data processing system, composed of four quadrants, each of which can process data from up to 504 antenna pairs. The complete Correlator has 2,912 printed circuits, 5,200 interface wires and over 20 million welding points. The Correlator is made up of Tunable Filter Bank (TFB) cards. The distribution requires four TFBs for data that arrives from a single antenna. These cards have been developed and optimized by the Bordeaux University in France.
$$1225 = \frac{50 \times 49}{2}.$$
Again, once you perform correlation, you lose the phase information.
If you did that in each branch of your diagram, you would never be able to do the subsequent correlations properly because you'd lose phase information along the way, and could therefore no longer perform correlation of every possible pair.
There may still be some lossy algorithms that allow you to do some reduced amount of imaging the way that you propose, but the point of building such a large and expensive array would be to get the maximum amount of information.
So in reality the signal from each element is heterodyned with a local oscillator (How does ALMA produce stable, mutually coherent ~THz local oscillators for all of their dishes?) to a baseband of a few GHz, digitized (Why are the ALMA receivers' ADCs only 3-bits?) and then sent along a digital fiber optic cable to the main correlator computer building, with the original phase information from each dish still intact (albeit in digital form).
Important Caveat: However, in your diagram you could imagine that each of the green elements in the top layer is a "patch" of reflector on one dish, and each combination box in red (middle layer) is the collecting feed horn of one dish. So phase information from within the aperture of a single dish is indeed lost forever.
In that sense yes, it does work, and the resolution of an array is limited by the distances that separate the centers of each of the dishes, and not by the size of the dishes.
To help think more about that, see How do ASKAP's focal plane phased array feeds interact with the entire array phasing?
@Calc-You-Later that sentence is not incorrect, but doing that would not let you generate a final radio image of the same quality (less=easier). It would be more efficient in terms of money alone if you build a cheaper system, but you will lose performance. If you need or want that lost performance, then it would not be more efficient because zero results divided by lots of money spent is zero efficiency. It's possible that you might be better off hurting your results by using fewer dishes than hurting your results by using localized pre-processing, at least that's my understanding.
@Calc-You-Later However, if your preprocessing boxes are local correlators and you pass along not only the raw data and its phase information, but also some of the local correlation results by increasing the bandwidth of the interconnects, that might work, but it's not any cheaper and may be more expensive. It would be basically moving some processors from the central correlators out towards the edges; you don't save anything and you introduce more complexity and more long-distance high-bandwidth connections.
|
CommonCrawl
|
In the diagram, $\angle PQR = 90^\circ$. What is the value of $x$?
[asy]
size(100);
draw((0,1)--(0,0)--(1,0));
draw((0,0)--(.9,.47));
draw((0,.1)--(.1,.1)--(.1,0));
label("$P$",(0,1),N); label("$Q$",(0,0),SW); label("$R$",(1,0),E); label("$S$",(.9,.47),NE);
label("$2x^\circ$",(.15,.2)); label("$x^\circ$",(.32,-.02),N);
[/asy]
Since $\angle PQR=90^\circ$, we have $2x^\circ+x^\circ=90^\circ$, so $3x=90$ and $x=\boxed{30}$.
|
Math Dataset
|
The property of two sets $A$ and $B$ in a topological space $X$ requiring the existence of a continuous real-valued function $f$ on $X$ such that the closures of the sets $f(A)$ and $f(B)$ (relative to the usual topology on the real line $\mathbf R$) do not intersect. For example, a space is completely regular if every closed set is separable from each one-point set that does not intersect it. A space is normal if every two closed non-intersecting subsets of it are functionally separable. If every two (distinct) one-point sets in a space are functionally separable, then the space is called functionally Hausdorff. The content of these definitions is unchanged if, instead of continuous real-valued functions, one takes continuous mappings into the plane, into an interval or into the Hilbert cube.
|
CommonCrawl
|
September 2017, 16(5): 1707-1718. doi: 10.3934/cpaa.2017082
A direct method of moving planes for a fully nonlinear nonlocal system
Pengyan Wang and Pengcheng Niu
Department of Applied Mathematics, Northwestern Polytechnical University, Xi'an, Shaanxi, 710129, China
* Corresponding author
Received September 2016 Revised January 2017 Published May 2017
In this paper we consider the system involving fully nonlinear nonlocal operators:
$ \left\{\begin{array}{ll}{\mathcal F}_{\alpha}(u(x)) = C_{n,\alpha}\, PV \int_{\mathbb{R}^n} \frac{F(u(x)-u(y))}{|x-y|^{n+\alpha}}\, dy=v^p(x)+k_1(x)u^r(x),\\{\mathcal G}_{\beta}(v(x)) = C_{n,\beta}\, PV \int_{\mathbb{R}^n} \frac{G(v(x)-v(y))}{|x-y|^{n+\beta}}\, dy=u^q(x)+k_2(x)v^s(x),\end{array}\right.$
where $0<\alpha, \beta<2$, $p, q, r, s>1$, and $k_1(x), k_2(x)\geq0$.
A narrow region principle and a decay at infinity are established for carrying out the method of moving planes. Then we prove the radial symmetry and monotonicity for positive solutions to the nonlinear system in the whole space. Furthermore, the non-existence of positive solutions to the system on a half space is derived.
Keywords: Fully nonlinear nonlocal operator, narrow region principle, decay at infinity, method of moving planes, non-existence.
Mathematics Subject Classification: 35J45, 35J60, 45G05, 45G15.
Citation: Pengyan Wang, Pengcheng Niu. A direct method of moving planes for a fully nonlinear nonlocal system. Communications on Pure & Applied Analysis, 2017, 16 (5) : 1707-1718. doi: 10.3934/cpaa.2017082
|
CommonCrawl
|
Redox Reaction of Multivalent Ions in Glass Melts
Kim, Kidong (Department of Materials Science and Engineering, Kunsan National University)
https://doi.org/10.4191/kcers.2015.52.2.83
The redox reaction $M^{(x+n)+}+\frac{n}{2}O^{2-}{\rightleftarrows}M^{x+}+\frac{n}{4}O_2$ of multivalent ions in glass melts influences the melting process and the final properties of the glass, including the fining (removal of bubbles), infrared absorption and homogenization of the melt, reactions between metal electrodes and melts or between refractories and melts, and the transmission and color of the glass. In this review paper, the redox behaviors that occur frequently in the glass production process are introduced and square wave voltammetry (SWV) is described in detail as an in situ method of examining the redox behavior of multivalent ions in the melt state. Finally, some voltammetry results for LCD glass melts are reviewed from the practical viewpoint of SWV.
Redox reaction;Multivalent element;Glass melt;Voltammetry
Supported by : National Research Foundation of Korea (NRF)
|
CommonCrawl
|
\begin{document}
\begin{frontmatter}
\title{Generalized gamma approximation with rates for~urns, walks and trees} \runtitle{Generalized gamma approximation with rates}
\begin{aug}
\author[A]{\fnms{Erol A.}~\snm{Pek\"oz}\thanksref{T1}\ead[label=e1]{[email protected]}}, \author[B]{\fnms{Adrian}~\snm{R\"ollin}\thanksref{T1}\ead[label=e2]{[email protected]}} \and \author[C]{\fnms{Nathan}~\snm{Ross}\corref{}\thanksref{T3}\ead[label=e3]{[email protected]}} \runauthor{E. A. Pek\"oz, A. R\"ollin and N. Ross} \affiliation{Boston University, National University of Singapore and University of Melbourne}
\address[A]{E. A. Pek\"oz\\ Questrom School of Business\\ Boston University \\ 595 Commonwealth Avenue\\ Boston, Massachusetts 02215\\ USA\\ \printead{e1}} \address[B]{A. R\"ollin\\ Department of Statistics\\ \quad and Applied Probability\\ National University of Singapore\\ 6 Science Drive 2\\ Singapore 117546\\ \printead{e2}} \address[C]{N. Ross\\ School of Mathematics and Statistics\\ University of Melbourne\\ Parkville VIC, 3010\\ Australia\\ \printead{e3}} \end{aug}
\thankstext{T1}{Supported in part by NUS Grant R-155-000-124-112.} \thankstext{T3}{Supported in part by ARC Grant DP150101459, NSF Grants DMS-07-04159, DMS-08-06118, DMS-11-06999 and ONR Grant N00014-11-1-0140.}
\received{\smonth{9} \syear{2013}}
\revised{\smonth{2} \syear{2015}}
\begin{abstract} We study a new class of time inhomogeneous P\'olya-type urn schemes and give optimal rates of convergence for the distribution of the properly scaled number of balls of a given color to nearly the full class of generalized gamma distributions with integer parameters, a class which includes the Rayleigh, half-normal and gamma distributions. Our main tool is Stein's method combined with characterizing the generalized gamma limiting distributions as fixed points of distributional transformations related to the equilibrium distributional transformation from renewal theory. We identify special cases of these urn models in recursive constructions of random walk paths and trees, yielding rates of convergence for local time and height statistics of simple random walk paths, as well as for the size of random subtrees of uniformly random binary and plane trees. \end{abstract}
\begin{keyword}[class=AMS] \kwd[Primary ]{60F05} \kwd{60C05} \kwd[; secondary ]{60E10} \kwd{60K99} \end{keyword}
\begin{keyword} \kwd{Generalized gamma distribution} \kwd{P\'olya urn model} \kwd{Stein's method} \kwd{distributional transformations} \kwd{random walk} \kwd{random binary trees} \kwd{random plane trees} \kwd{preferential attachment random graphs} \end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec11}
Generalized gamma distributions arise as limits in a variety of combinatorial settings involving random trees [e.g.,~\citet {Janson2006a}, \citet{Meir1978} and \citet {Panholzer2004}], urns [e.g.,~\citet{Janson2006}], and walks [e.g.,~\citet{Chung1976}, \citet{Chung1949} and \citet{Durrett1977}]. These distributions are those of gamma variables raised to a power and noteworthy examples are the Rayleigh and half-normal distributions. We show that for a family of time inhomogeneous generalized P\'olya urn models, nearly the full class of generalized gamma distributions with integer parameters appear as limiting distributions, and we provide optimal rates of convergence to these limits. Apart from some special cases, both the characterizations of the limit distributions and the rates of convergence are new.
The result for our urn model (Theorem~\ref{thm1} below) follows from a general approximation result (Theorem~\ref{thm4} below) which provides a framework for bounding the distance between a generalized gamma distribution and a distribution of interest. This result is derived using Stein's method [see \citet{Ross2011}, \citet{Ross2007} and \citet{Chen2011} for overviews] coupled with characterizing the generalized gamma distributions as unique fixed points of certain distributional transformations. Similar approaches to deriving approximation results have found past success for other distributions in many applications: the size-bias transformation for Poisson approximation by \citet {Barbour1992}, the zero-bias transformation for normal approximation by \citeauthor{Goldstein1997} (\citeyear{Goldstein1997,Goldstein2005a}) [and a discrete analog of~\citet {Goldstein2006}], the equilibrium transformation of renewal theory for both exponential and geometric approximation, and an extension to negative binomial approximation by~\citet {Pekoz2011}, \citet{Pekoz2013a} and \citet{Ross2013}, and a transformation for a class of distributions arising in preferential attachment graphs by \citet{Pekoz2013}. \citet{Luk1994} and \citet{Nourdin2009} developed Stein's method for gamma approximation, though the approaches there are quite different from ours. Theorem~\ref{thm4} is a significant generalization and embellishment of this previous work.
Using the construction of \citet{Remy1985} for generating uniform random binary trees, we find some of our urn distributions embedded in random subtrees of uniform binary trees and plane trees. Moreover, a well-known bijection between binary trees and Dyck paths yields analogous embeddings in some local time and height statistics of random walk. By means of these embeddings, we are able to prove convergence to generalized gamma distributions with rates for these statistics. These limits and in general the connection between random walks, trees and distributions appearing in Brownian motion are typically understood through classical bijections between trees and walks along with Donsker's invariance principle, or through the approach of Aldous' continuum random tree; see \citet{Aldous1991}. While these perspectives are both beautiful and powerful, the mathematical details are intricate and they do not provide rates of convergence. In this setting, our work can be viewed as a simple unified approach to understanding the appearance of these limits in the tree-walk context which has the added benefit of providing rates of convergence.
In the remainder of the \hyperref[sec11]{Introduction}, we state our urn, tree and walk results in detail.
\subsection{Generalized gamma distribution}
For $\alpha>0$, denote by ${\mathrm{G}}(\alpha)$ the gamma distribution with shape parameter $\alpha$ having density $x^{\alpha-1} e^{-x}/\Gamma(\alpha)\,dx$, $x>0$.
\begin{definition}[(Generalized gamma distribution)] For positive real numbers $\alpha$ and $\beta$, we say a random variable $Z$ has the \emph{generalized gamma distribution with parameters} $\alpha$ \emph{and} $\beta$ and write $Z\sim\operatorname{GG}(\alpha,\beta)$, if $Z\stackrel{\mathscr{D}}{=}X^{1/\beta}$, where $X\sim{\mathrm{G}} (\alpha/\beta)$. \end{definition}
The density of $Z\sim\operatorname{GG}(\alpha,\beta)$ is easily seen to be
\[ \varphi_{\alpha,\beta}(x) = \frac{\beta x^{\alpha-1}e^{-x^\beta }}{\Gamma(\alpha/\beta)} \,dx,\qquad x>0, \]
and for any real $p>-\alpha$, $\mathbb{E} Z^p = \Gamma((\alpha+p)/\beta )/\Gamma(\alpha/\beta)$; in particular $\mathbb{E} Z^\beta=\alpha/\beta$. The generalized gamma family includes the Rayleigh distribution, $\operatorname{GG} (2,2)$, the absolute or ``half'' normal distribution, $\operatorname{GG}(1,2)$, and the standard gamma~distribution, $\operatorname{GG}(\alpha, 1)$.
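For numerical experimentation, the definition can be used directly as a sampling recipe: draw $X\sim{\mathrm{G}}(\alpha/\beta)$ and return $X^{1/\beta}$. The following Python snippet is a minimal sketch of this (the function name \texttt{sample\_gg} is ours and the snippet is only an illustration, not part of the formal development).

\begin{verbatim}
import numpy as np

def sample_gg(alpha, beta, size, rng=None):
    # Z = X**(1/beta) with X ~ Gamma(alpha/beta, scale=1), as in the definition
    rng = np.random.default_rng() if rng is None else rng
    x = rng.gamma(shape=alpha / beta, scale=1.0, size=size)
    return x ** (1.0 / beta)

# Sanity check: E[Z**beta] = alpha/beta, so for the Rayleigh case GG(2,2)
# the empirical mean of Z**2 should be close to 1.
z = sample_gg(2.0, 2.0, size=10**6, rng=np.random.default_rng(0))
print((z ** 2).mean())
\end{verbatim}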
\subsection{P\'olya urn with immigration}
We now define a variation of P\'olya's urn. An urn starts with black and white balls and draws are made sequentially. After each draw, the ball is replaced and another ball of the same color is added to the urn. Also, after every $l$th draw an additional black ball is added to the urn. Let ${\mathcal{P}}^l_n(b,w)$ denote the distribution of the number of white balls in the urn after $n$ draws have been made when the urn starts with $b\geq0$ black balls and $w>0$ white balls. Note that for the case $l=1$ the process is time homogeneous but for $l\geq2$ it is time inhomogeneous. Define the \emph{Kolmogorov distance} between two cumulative distribution functions $P$ and $Q$ (or their respective laws) as
\[ d_{\mathrm{K}}(P,Q)=\sup_x\bigl\vert P(x)-Q(x)\bigr\vert. \]
The Kolmogorov metric is a standard and natural metric for random variables on the real line and is used for statistical inference, for example, in computing \mbox{``$p$-}values''.
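The urn dynamics are straightforward to simulate; the following Python sketch (our own illustration, with the hypothetical function name \texttt{polya\_with\_immigration}) implements the verbal description above and can be used to compare the scaled white-ball counts with the generalized gamma limit appearing in Theorem~\ref{thm1} below.

\begin{verbatim}
import random

def polya_with_immigration(n, l, b, w, rng=random):
    # Number of white balls after n draws of the P^l_n(b, w) urn:
    # the drawn ball is replaced together with one ball of the same color,
    # and after every l-th draw one additional black ball immigrates.
    black, white = b, w
    for draw in range(1, n + 1):
        if rng.random() < white / (black + white):
            white += 1
        else:
            black += 1
        if draw % l == 0:
            black += 1
    return white

# The scaled counts N_n / n^(l/(l+1)) should be approximately generalized
# gamma distributed for large n; here we report their empirical mean.
samples = [polya_with_immigration(n=20000, l=2, b=1, w=1) for _ in range(200)]
print(sum(samples) / (len(samples) * 20000 ** (2 / 3)))
\end{verbatim}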
\begin{theorem}\label{thm1} Let $l,w\geq1$ and let $N_n \sim{\mathcal{P}}^{l}_{n}(1,w)$. Then $\IE N_n^k\asymp n^{kl/(l+1)}$ as $n\to\infty$ for any integer $k\geq0$, and
\begin{eqnarray*} &&\IE N_n^{l+1}\sim n^l w \biggl( \frac{l+1}{l} \biggr)^{l}. \end{eqnarray*}
Furthermore, there are constants $c =c_{l,w}$ and $C =C_{l,w}$, independent of $n$, such that
\begin{equation} \label{1} c n^{-l/(l+1)}\leq d_{\mathrm{K}}\bigl(\mathscr{L}(N_{n}/ \mu_{n}), \operatorname{GG}(w,l+1) \bigr) \leq C n^{-l/(l+1)}, \end{equation}
where
\begin{equation} \label{2} \mu_n=\mu_n(l, w)= \biggl( \frac{l+1}{w} \IE N_n^{l+1} \biggr)^{1/(l+1)} \sim n^{l/(l+1)}\frac{(l+1)}{l^{l/(l+1)}}. \end{equation}
\end{theorem}
\begin{remark} A direct application of this result is to a preferential attachment random graph model [see \citet{Barabasi1999}, \citet{Pekoz2013}] that initially has one node having weight $w$ (thought of as the degree of that node or a collection of nodes grouped together). Additional nodes are added sequentially and, when a node is added, it attaches $l$ edges, one at a time, directed \emph{from} it \emph{to} either itself or to nodes in the existing graph according to the following rule. Each edge attaches to a potential node with chance proportional to that node's weight at that exact moment, where incoming edges contribute weight one to a node and each node other than the initial node is born having initial weight one. The case where $l=1$ is the usual Barab\'asi--Albert tree with loops (though started from a node with initial weight $w$ and no edges). A moment's thought shows that after an additional $n$ edges have been added to the graph, the total weight of the initial node has distribution ${\mathcal{P}}^{l}_{n}(1,w)$. \citet{Pekoz2014} extend the results of this paper in this preferential attachment context to obtain limits for joint distributions of the weights of nodes. \end{remark}
\begin{remark}\label{rem1} Theorem~\ref{thm1} in the case when $l=1$ is covered by Example~3.1 of \citet{Janson2006}, but without a rate of convergence. The limit and rate for the two special cases where $w=l=1$ and $l=1$, $w=2$ are stated in Theorem~1.1 of \citet{Pekoz2013}; in fact the rate proved there is $n^{-1/2}\log n $ (there is an error in the last line of the proof of their Lemma~4.2), but our approach here yields the optimal rate claimed there. \end{remark}
\begin{remark} For $n\geq l$, it is clear that
\begin{equation} \label{3} {\mathcal{P}}^l_n(0,w) = { \mathcal{P}}^l_{n-l}(1,w+l), \end{equation}
since, if the urn is started without black balls, the progress of the urn is deterministic until the first immigration. ${\mathcal {P}}^l_{n}(1,w)$ is more natural in the context of the proof of Theorem~\ref{thm1} but in our combinatorial applications, ${\mathcal{P}}^l_n(0,w)$ can be easier to work with and so we will occasionally apply Theorem~\ref{thm1} directly to ${\mathcal{P}}^l_n(0,w)$ via~\eqref{3}. Further, in order to easily switch between these two cases without introducing unnecessary notation or case distinctions, we define, in accordance with~\eqref{3}, ${\mathcal{P}} ^l_{-i}(1,w+l)$ to be a point mass at $ w+l-i$ for all $0\leq i\leq l$. \end{remark}
\begin{remark}\label{rem2} P\'olya urn schemes have a long history and large literature. In brief, the basic model, in which the urn starts with $w$ white and $b$ black balls and at each stage a ball is drawn at random and replaced with $\alpha$ balls of the same color, was introduced in~\citet{Eggenberger1923} as a model for disease contagion. The proportion of white balls converges almost surely to a variable having beta distribution with parameters $(w/\alpha, b/\alpha)$. A well-known embellishment [see \citet{Friedman1949}] is to replace the ball drawn along with $\alpha$ balls of the same color and $\beta$ of the other color and here if $\beta\neq0$ the proportion of white balls almost surely converges to $1/2$; and \citet{Freedman1965} proves a Gaussian limit theorem for the fluctuation around this limit.
The general case can be encoded by $(\alpha, \beta; \gamma, \delta )_{b,w}$ where now the urn starts with $b$ black and $w$ white balls and at each stage a ball is drawn and replaced; if the ball drawn is black (white), then $\alpha$ ($\gamma$) black balls and $\beta$ ($\delta$) white balls are added. As suggested by the previous paragraph, the limiting behavior of the urn can vary wildly depending on the relationship of the six parameters involved and especially the Greek letters; even the first-order growth of the number of white balls is highly sensitive to the parameters.
A useful tool for analyzing the general case is to embed the urn process into a multitype branching process and use the powerful theory available there. This was first suggested and implemented by \citet{Athreya1968} and has found subsequent success in many further works; see \citet{Janson2006} and \citet{Pemantle2007}, and references therein. An alternative approach that is especially useful when $\alpha$ or $\delta$ are negative (under certain conditions this leads to a \emph{tenable} urn) is the analytic combinatorics methods of \citet{Flajolet2005}; see also the \hyperref[sec11]{Introduction} there for further references.
Note that all of the references of the previous paragraphs regard homogeneous urn processes and so do not directly apply to the model of Theorem~\ref{thm1} with $l\geq2$. In fact, the extensive survey \citet{Pemantle2007} has only a small section with a few references regarding time dependent urn models. Time inhomogeneous urn models do have an extensive statistical literature due to their wide usage in the experimental design of clinical trials (the idea being that it is ethical to favor experimental treatments that initially do well over those that initially do not); see \citet{Zhang2006}, \citet{Zhang2011} and \citet{Bai2002}. This literature is concerned with models and regimes where the asymptotic behavior is Gaussian. As discussed in \citet{Janson2006}, it is difficult to characterize nonnormal asymptotic distributions of generalized P\'olya urns, even in the time homogeneous case. \end{remark}
\begin{remark} There are many possible natural generalizations of the model we study here, such as starting with more than one black ball or adding more than one black ball every $l$th draw. We have restricted our study to the ${\mathcal{P}}_n^{l}(1,w)$ urn because these variations lead to asymptotic distributions outside the generalized gamma class. For example, the case ${\mathcal{P}}^1_n(b,w)$ with integer $b\geq1$ is studied in \citet{Pekoz2013}, where it is shown for $b\geq2$ the limits are powers of products of independent beta and gamma random variables. Our main purpose here is to study the generalized gamma regime carefully and to highlight the connection between these urn models and random walks and trees. \end{remark}
\subsection{Applications to sub-tree sizes in uniform binary and plane trees}
Denote by $T^p_n$ a uniformly chosen rooted plane tree with $n$ nodes, and denote by $T^b_{2n-1}$ a uniformly chosen binary, rooted plane tree with $2n-1$ nodes, that is, with $n$ leaves and $n-1$ internal nodes. It is well known that the number of such trees in both cases is the Catalan number $C_{n-1}={2n-2\choose n-1}/n$ and that both families of random trees are instances of simply generated trees; see Examples~10.1 and~10.3 of \citet{janson2012}.
For any rooted tree $T$ let $\operatorname{sp} ^{k}_{\mathrm{Leaf}}(T)$ be the number of vertices in the minimal spanning tree spanned by the root and $k$ randomly chosen distinct \emph{leaves} of $T$, and let $ \operatorname{sp}^{k}_{\mathrm{Node}}(T)$ be the number of vertices in the minimal spanning tree spanned by the root and $k$ randomly chosen distinct \emph{nodes} of $T$.
\begin{theorem}\label{thm2} Let $\mu_n(1,w)$ be as in \eqref{2} of Theorem~\ref{thm1}. Then, for any $k\geq1$,
\begin{eqnarray*} \mbox{(\textup{i})}&\quad& d_{\mathrm{K}}\bigl(\mathscr{L}\bigl({\operatorname{sp} ^{k}_{\mathrm{Leaf}}\bigl(T^b_{2n-1}\bigr)}/{ \mu_{n-k-1}(1,2k)} \bigr),\operatorname{GG}(2k,2) \bigr) = \mathrm{O}\bigl(n^{-1/2} \bigr), \\ \mbox{(\textup{ii})}&\quad& d_{\mathrm{K}}\bigl(\mathscr{L}\bigl({\operatorname{sp} ^{k}_{\mathrm{Node}}\bigl(T^b_{2n-1}\bigr)}/{ \mu_{n-k-1}(1,2k)} \bigr),\operatorname{GG}(2k,2) \bigr) = \mathrm{O}\bigl(n^{-1/2} \bigr), \\ \mbox{(\textup{iii})}&\quad & d_{\mathrm{K}}\bigl(\mathscr{L}\bigl({2\operatorname{sp} ^{k}_{\mathrm{Node}} \bigl(T^p_{n}\bigr)}/{\mu_{n-k-1}(1,2k)} \bigr), \operatorname{GG}(2k,2) \bigr) = \mathrm{O}\bigl(n^{-1/2}\log n \bigr). \end{eqnarray*}
\end{theorem}
\begin{remark} The logarithms in (iii) of the theorem and in (iii) and (iv) of the forthcoming Theorem~\ref{thm3} are likely an artifact of our analysis, specifically in the use of Lemma~\ref{lem2}. \end{remark}
\begin{remark}\label{rem3} The limits in the theorem can also be seen using facts about the Brownian continuum random tree (CRT) due to \citeauthor{Aldous1991} (\citeyear{Aldous1991,Aldous1993}). Indeed, the trees $T_{2n-1}^b$ and $T_n^p$ can\vspace*{2pt} be understood to converge in a certain sense to the Brownian CRT. The limit of the subtrees we study having $k$ leaves can be defined through the Poisson line-breaking construction as described following Theorem~7.9 of \citet{Pitman2006}:
\[ \begin{tabular}{p{300pt}} Let $0<\Theta_1< \Theta_2<\cdots$ be the points of an inhomogeneous Poisson process on $\mathbb{R}_{>0}$ of rate $t \,dt$. Break the line $[0,\infty)$ at points $\Theta_k$. Grow trees $\mathcal{T}_k$ by letting $\mathcal{T}_1$ be a segment of length $\Theta_1$, then for $k\geq2$ attaching the segment $(\Theta_{k-1}, \Theta_k]$ as a ``twig'' attached at a random point of the tree $\mathcal{T}_{k-1}$ formed from the first $k-1$ segments. \end{tabular}
\]
The length of this tree is just $\Theta_k$ which is the generalized gamma limit of the theorem (up to a constant scaling). In\vspace*{1pt} more detail, if we jointly generate the vector $\mathbf{U}_k(n):=(\operatorname{sp}^{1}_{\mathrm {Leaf}}(T^b_{2n-1}),\ldots,\operatorname{sp} ^{k}_{\mathrm{Leaf}}(T^b_{2n-1}))$ by first selecting $k$ leaves uniformly at random from $T^b_{2n-1}$, then labeling the selected leaves $1,\dots,k$, and then setting $\operatorname{sp}^{i}_{\mathrm {Leaf}}(T^b_{2n-1})$ to be the number of nodes in the tree spanned by the root and the leaves labeled $1,\dots,i$, then the CRT theory implies $n^{-1/2}\mathbf{U}_k(n)$ converges in distribution to $(\Theta_1,\ldots,\Theta_k)$; see also \citet {Pekoz2014} for a proof of this fact with a rate of convergence.
\citeauthor{Panholzer2004} [(\citeyear{Panholzer2004}), Theorem~6] provides local limit theorems for\break $\operatorname{sp}^{k}_{\mathrm{Node}}(T^b_{2n-1})$ and $\operatorname{sp}^{k}_{\mathrm{Node}}(T^p_{n})$, from which the distributional convergence to the generalized gamma can be seen. It may be possible to obtain such (and other) local limit results using our Kolmogorov bounds and the approach of \citet{Rollin2012a}, but in any case the convergence rates in the Kolmogorov metric in Theorem~\ref{thm2} appear to be new. \end{remark}
\subsection{Applications to occupation times and heights in random walk, bridge and meander}
Consider the one-dimensional simple symmetric random walk $S_n = (S_n(0),\dots,S_n(n))$ of length $n$ starting at the origin. Define
\[ L_n = \sum_{i=0}^{n} {\mathrm{I}} \bigl[S_n (i) =0\bigr] \]
to be the number of times the random walk visits the origin by time $n$. Let
\[ L^b_{2n} \sim\mathscr{L}\bigl(L_{2n} | S_{2n}(0)= S_{2n}(2n)=0 \bigr) \]
be the local time of a random walk bridge, and define random walk excursion and meander by
\begin{eqnarray*} S_{2n}^e & \sim&\mathscr{L}\bigl(S_{2n} | S_{2n}(0)=0, S_{2n}(1)>0,\dots,S_{2n}(2n-1) > 0, S_{2n}(2n)=0 \bigr), \\ S_n^m & \sim&\mathscr{L}\bigl(S_n | S_{n}(0)= 0, S_{n}(1)>0,\dots,S_{n}(n) > 0 \bigr). \end{eqnarray*}
\begin{theorem}\label{thm3} Let $\mu_n(1,w) = \mu_n$ be as in \eqref{2} of Theorem~\ref{thm1} and let $K$ be uniformly distributed on $\{0,\dots,2n\}$ and independent of $(S^e_{2n}(u))_{u=0}^{2n}$. Then
\begin{eqnarray*} \mbox{(\textup{i})} &\quad& d_{\mathrm{K}}\bigl(\mathscr{L}\bigl({ L_{n} }/{\mu_{{\lfloor n/2\rfloor}}(1,1)} \bigr),\operatorname{GG}(1,2) \bigr) = \mathrm{O}\bigl(n^{-1/2} \bigr), \\ \mbox{(\textup{ii})} &\quad& d_{\mathrm{K}}\bigl(\mathscr{L}\bigl({ L^b_{2n} }/{\mu _{n-1}(1,2)} \bigr),\operatorname{GG}(2,2) \bigr) = \mathrm{O}\bigl(n^{-1/2} \bigr), \\ \mbox{(\textup{iii})} &\quad& d_{\mathrm{K}}\bigl(\mathscr{L}\bigl({2S^e_{2n}(K)}/{\mu _{n-2}(1,2)} \bigr),\operatorname{GG}(2,2) \bigr) = \mathrm{O}\bigl(n^{-1/2}\log n \bigr), \\ \mbox{(\textup{iv})} &\quad& d_{\mathrm{K}}\bigl(\mathscr{L}\bigl({ 2S^m_{n}(n) }/{ \mu_{{\lfloor (n-1)/2\rfloor}-1}(1,2)} \bigr),\operatorname{GG}(2,2) \bigr) = \mathrm{O}\bigl(n^{-1/2} \log n \bigr). \end{eqnarray*}
\end{theorem}
\begin{remark}\label{rem4} An alternative viewpoint of the limits in Theorem~\ref{thm3} is that they are the analogous statistics of Brownian motion, bridge, meander and excursion, which can be read from \citet{Chung1976} and \citet{Durrett1977}; these Brownian fragments are the weak limits in the path space $C[0,1]$ of the walk fragments we study; see \citet{Csaki1981}. For example, if $B_t$, $t\geq0$, is a standard Brownian motion and $(L_t^x, t\geq0,x\in\mathbb{R})$ its local time at level $x$ up to time $t$, then L\'evy's identity implies that $L_1^0$ is equal in distribution to the maximum of $B_t$ up to time $1$, which has a half-normal distribution; see also \citet{Borodin1987}.
To check the remaining limits of the theorem [which are Rayleigh, $\operatorname{GG} (2,2)$], we can use \citet{Pitman1999b}, equation~(1) [see also \citet {Borodin1989}] which states that for $y>0$ and $b\in\mathbb{R}$,
\begin{eqnarray} \label{4} \quad && \mathbb{P}\bigl[L_1^x\in dy, B_1\in db\bigr] \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad = \frac{1}{\sqrt{2\pi}} \bigl(\vert x\vert+\vert b-x\vert+y \bigr) \exp \biggl(-\frac{1}{2}\bigl(\vert x\vert+\vert b-x\vert+y \bigr)^2 \biggr) \,dy \,db. \end{eqnarray}
Roughly, for the local time of Brownian bridge at time $1$ we set $b=x=0$ in~\eqref{4} and multiply by $\sqrt{2\pi}$ (due to conditioning $B_1=0$) to see the Rayleigh density. For the final height of the Brownian meander, we set $x=y=0$ in~\eqref{4} and multiply by $\sqrt{\pi/2}$ (due to conditioning $L_1^0=0$), and note here that $b\in\mathbb{R}$ so by symmetry we restrict $b>0$ and multiply by $2$ to get back to the Rayleigh density. Finally, due to Vervaat's transformation [\citet{Vervaat1979}], the height of standard Brownian excursion at a uniform random time has the same distribution as the maximum of Brownian bridge on $[0,1]$. If we denote by $M$ this maximum, then for $x>0$ we apply~\eqref{4} to obtain
\begin{eqnarray*} \mathbb{P}[M>x]&=&\mathbb{P}\bigl[L_1^x>0| B_1=0\bigr] = \int_0^\infty(2x+y) \exp\biggl(-\frac{1}{2} (2x+y )^2 \biggr) \,dy =e ^{-2x^2}, \end{eqnarray*}
which is the claimed Rayleigh distribution.
With the exception of the result for $L_n$, which can be read from \citet{Chung1949}, inequality~(1) or \citet{Dobler2013}, Theorem~1.2, the convergence rates appear to be new. \end{remark}
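As a purely numerical complement to these identities, part (i) of Theorem~\ref{thm3} is easy to probe by simulation. The sketch below is our own illustration; it normalizes by $\sqrt{2n}$, which is asymptotically equivalent to $\mu_{\lfloor n/2\rfloor}(1,1)$, and compares the scaled number of visits to the origin with the half-normal limit $\operatorname{GG}(1,2)$.

\begin{verbatim}
import math
import random

def local_time_at_zero(n, rng=random):
    # Number of visits to 0 by time n of a simple symmetric random walk,
    # counting the visit at time 0.
    pos, visits = 0, 1
    for _ in range(n):
        pos += 1 if rng.random() < 0.5 else -1
        visits += (pos == 0)
    return visits

n = 4000
scaled = [local_time_at_zero(n) / math.sqrt(2 * n) for _ in range(2000)]
# For Z ~ GG(1,2) we have E[Z^2] = 1/2, so this should be close to 0.5.
print(sum(x * x for x in scaled) / len(scaled))
\end{verbatim}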
\subsection{A general approximation result via distributional transforms}
Theorem~\ref{thm1} follows from a general approximation result using Stein's method coupled with a distributional transformation and a corresponding fixed point equation, which we now describe. We first generalize the size bias transformation used in Stein's method and appearing naturally in many places; see~\citet{Arratia2013} and \citet{Brown2006}.
\begin{definition} Let $\beta>0$ and let $W$ be a nonnegative random variable with finite $\beta$th moment. We say a random variable $W^{(\beta)}$ has the~\emph{$\beta$-power bias distribution of $W$}, if
\begin{equation} \label{5} \mathbb{E}\bigl\{W^\beta f(W) \bigr\} =\mathbb{E} W^\beta\mathbb{E} f \bigl(W^{(\beta )} \bigr) \end{equation}
for all $f$ for which the expectations exist. \end{definition}
In what follows, denote by $\mathrm{B}(a,b)$ the beta distribution with parameters \mbox{$a,b>0$}.
\begin{definition}\label{def1} Let $\alpha>0$ and $\beta>0$ and let $W$ be a positive random variable with $\mathbb{E} W^\beta= \alpha/\beta$. We\vspace*{2pt} say that $W^{*}$ has the \emph{$(\alpha,\beta)$-generalized equilibrium distribution of $W$}\/ if, for $V_\alpha\sim\mathrm{B}(\alpha,1)$ independent of $W^{(\beta)}$, we have
\begin{equation} \label{6} W^*\stackrel{\mathscr{D}}{=} V_\alpha W^{(\beta)}. \end{equation}
\end{definition}
\begin{remark}\label{7} \citet{Pakes1992}, Theorem~5.1 and \citet{Pitman2012}, Proposition~9 show that for a positive random variable $W$ with $\mathbb{E} W^\beta=\alpha/\beta$, we have $W\sim\operatorname{GG} (\alpha,\beta)$ if and only if $W\stackrel{\mathscr{D}}{=} W^*$. The $(1,2)$-generalized equilibrium distributional transformation is the nonnegative analog of the zero bias transformation of which the standard normal distributions are unique fixed points; see \citet{Chen2011}, Proposition~2.3, page~35, where the $2$-power bias transformation is appropriately called ``square'' biasing; thus $\operatorname{GG}(1,2)$ is the absolute normal distribution. \end{remark}
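Both transformations are easy to approximate empirically, which gives a quick numerical check of the fixed-point characterization just mentioned. The following Python sketch (our own illustration; the function name is hypothetical) power-biases an empirical sample and multiplies by an independent $\mathrm{B}(\alpha,1)$ variable.

\begin{verbatim}
import numpy as np

def generalized_equilibrium_sample(w, alpha, beta, rng):
    # Empirical (alpha, beta)-generalized equilibrium transform:
    # resample w with probabilities proportional to w**beta (beta-power biasing),
    # then multiply by independent Beta(alpha, 1) variables.
    p = w ** beta
    biased = rng.choice(w, size=w.size, replace=True, p=p / p.sum())
    v = rng.beta(alpha, 1.0, size=w.size)
    return v * biased

rng = np.random.default_rng(1)
alpha, beta = 2.0, 2.0
w = rng.gamma(alpha / beta, size=10**5) ** (1.0 / beta)    # W ~ GG(alpha, beta)
w_star = generalized_equilibrium_sample(w, alpha, beta, rng)
# GG(alpha, beta) is a fixed point of the transformation, so the two
# empirical means should agree up to Monte Carlo error.
print(w.mean(), w_star.mean())
\end{verbatim}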
\begin{theorem}\label{thm4} Let $W$ be a positive random variable with $\mathbb{E} W^\beta= \alpha/\beta$ for some integers $\alpha\geq1$ and $\beta\geq1$. Let $W^*$ be a random variable constructed on the same probability space having the $(\alpha,\beta)$-generalized equilibrium distribution of~$W$. Then there is a constant $c>0$ depending only on $\alpha$ and $\beta$ such that, for all $0<b\leq1$,
\begin{equation} \label{8} d_{\mathrm{K}}\bigl(\mathscr{L}(W), \operatorname{GG}(\alpha,\beta) \bigr) \leq c \bigl(b+ \mathbb{P} \bigl[\bigl\vert W-W^*\bigr\vert>b\bigr] \bigr). \end{equation}
\end{theorem}
\begin{remark} Let $X$ and $Y$ be two random variables and let
\[ d_{\mathrm{LP}}\bigl(\mathscr{L}(X),\mathscr{L}(Y)\bigr) = \inf\bigl\{b: \mathbb{P}[X\leq t]\leq\mathbb{P} [Y\leq t+b] + b\mbox{ for all }t\in\mathbb{R}\bigr\} \]
be the L\'evy--Prokhorov distance between $\mathscr{L}(X)$ and $\mathscr{L}(Y)$. A theorem due to Strassen [see, e.g., \citet{Dudley1968}, Theorem~2] says that there is a coupling $(X,Y)$ such that $\mathbb{P}[\vert X-Y\vert >\rho]\leq\rho$, where $\rho = d_{\mathrm{LP}}(\mathscr{L} (X),\mathscr{L}(Y))$. Hence, since \eqref{8} holds for \emph{all} $b$ and \emph{all} couplings of $W$ and $W^*$, it follows in particular that
\[ d_{\mathrm{K}}\bigl(\mathscr{L}(W), \operatorname{GG}(\alpha,\beta) \bigr) \leq2c d_{\mathrm{LP}}\bigl(\mathscr{L}(W),\mathscr{L}\bigl (W^*\bigr) \bigr). \]
\end{remark}
The paper is organized as follows. In Section~\ref{sec1}, we embed our urn model into random trees via R\'emy's algorithm and prove Theorem~\ref{thm2}. In Section~\ref{sec2a}, we describe the various connections between trees and walk paths and then prove Theorem~\ref{thm3}. In Section~\ref{sec2}, we use Theorem~\ref{thm4} to prove Theorem~\ref{thm1}, and finally in Section~\ref{sec3} we develop a general formulation of Stein's method for log concave densities and prove Theorem~\ref{thm4}.
\section{Random trees: Proof of Theorem~\texorpdfstring{\protect\ref{thm2}}{1.8}} \label{sec1}
\makeatletter \def\drawdot#1{\draw(#1) node [draw,circle,fill=black,inner sep=0.5pt] {}}
\tikzset{ int/.style = {circle, white, font=\bfseries, draw=black, fill=black,inner sep=0.5pt}, ext/.style = {circle, draw, inner sep=0.5pt}, arr/.style = {shorten >=1mm,shorten <=1mm}, arrd/.style = {arr,densely dotted}, arrn/.style = {sloped,pos=0.5} } \def\@intnode#1{\begin{tikzpicture}\tiny \node at (0,0) [int,inner sep=1pt,minimum size=3.5ex] {#1}; \end{tikzpicture}} \def\intnode#1{\protect\raisebox{-0.3ex}{\protect\@intnode#1}} \def\@extnode#1{\protect{\begin{tikzpicture}\tiny \node at (0,0) [ext,inner sep=1pt,minimum size=3.5ex] {#1}; \end{tikzpicture}}} \def\extnode#1{\protect\raisebox{-0.3ex}{\protect\@extnode#1}} \def\phantom{$\cdot$}{\phantom{$\cdot$}} \def\fixbox#1{\hbox to 1.1em {\hfil #1\hfil}} \makeatother
\subsection{R\'emy's algorithm for decorated binary trees}
\citet{Remy1985} introduced an elegant recursive algorithm to construct uniformly chosen decorated binary trees, where by ``decorated'' we mean that the leaves are labeled. This algorithm is the key ingredient of our approach, as it relates to the urn schemes of Theorem~\ref{thm1}. All trees are assumed to be plane trees throughout, and we will think of the tree as growing downward with the root at the top. We will refer to the ``left'' and ``right'' child of a node as seen from the reader's point of view looking at the tree growing downward.
\subsubsection*{R\'emy's algorithm for decorated binary trees (see Figure~\protect\ref{fig1})} Let $n\geq1$ and assume that $T^b_{2n-1}$ is a uniformly chosen decorated binary tree with $n$ leaves, labeled from $1$ to $n$. To obtain a uniformly chosen decorated binary tree $T^b_{2n+1}$ with $n+1$ leaves do the following:
\begin{longlist}
\item[\textit{Step}~1.] Choose a node uniformly at random; call it $X$. Remove $X$ and its sub-tree, insert a new internal node at this position, call it $Y$, and attach $X$ and its sub-tree to $Y$.
\item[\textit{Step}~2.] With probability $1/2$ each, do either of the following:
\begin{longlist}[(a)]
\item[(a)] Attach a new leaf with label $n+1$ as the left-child of $Y$ (making $X$ the right-child of $Y$).
\item[(b)] Attach a new leaf with label $n+1$ as the right-child of $Y$ (making $X$ the left-child of $Y$). \end{longlist}
\end{longlist}
\begin{figure}
\caption{Illustration of R\'emy's algorithm to construct decorated binary trees. Internal nodes are represented by black circles, and leaves by white circles. For the sake of clarity, we keep the internal nodes labeled, but these labels will be removed in the final step. We start with Tree~A, the trivial tree. The step from Tree~A to Tree~B is \extnode{1}L, where ``\extnode{1}'' indicates the node that was chosen, and ``L'' indicates that this node, along with its sub-tree, is attached to the new node as the left-child. Using this notation, the remaining steps to get to Tree~F are \extnode{2}L, \intnode{2}L, \intnode{2}R, \extnode{1}R. Then remove the labels of the internal nodes to obtain Tree~G, the final tree.}
\label{fig1}
\end{figure}
This recursive algorithm produces uniformly chosen decorated binary trees, since every decorated binary tree can be obtained in exactly one way, and since at every iteration every new tree is chosen with equal probability. By removing the labels, we obtain a uniformly chosen undecorated binary tree.
Figure~\ref{fig1} illustrates the algorithm by means of an example. We have labeled the internal nodes to make the procedure clearer, but it is important to note that these internal labels are not chosen uniformly among all such labelings and, therefore, have to be removed at the final step (to see this, note that Tree~C in Figure~\ref{fig1} cannot be obtained through R\'emy's algorithm if the labels of the two internal nodes are switched).
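For concreteness, the following Python sketch implements the two steps above on a simple pointer-based representation (the class and function names are ours; the code is only an illustration of the algorithm).

\begin{verbatim}
import random

class Node:
    def __init__(self, label=None):
        self.label = label                  # leaf label, None for internal nodes
        self.left = self.right = self.parent = None

def remy(n, rng=random):
    # Grow a uniformly chosen decorated binary tree with n leaves (2n-1 nodes).
    root = Node(label=1)                    # start with the trivial tree: Leaf 1
    nodes = [root]
    for m in range(2, n + 1):
        x = rng.choice(nodes)               # Step 1: choose a node X uniformly
        y = Node()                          # new internal node Y takes X's place
        y.parent = x.parent
        if x.parent is None:
            root = y
        elif x.parent.left is x:
            x.parent.left = y
        else:
            x.parent.right = y
        leaf = Node(label=m)                # Step 2: attach the new leaf to Y
        leaf.parent = x.parent = y
        if rng.random() < 0.5:
            y.left, y.right = leaf, x       # (a) new leaf is the left-child
        else:
            y.left, y.right = x, leaf       # (b) new leaf is the right-child
        nodes.extend([y, leaf])
    return root, nodes

root, nodes = remy(5)
print(len(nodes))                           # 2*5 - 1 = 9
\end{verbatim}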
\subsection{Sub-tree sizes}
\subsubsection*{Spanning trees in binary trees} R\'emy's algorithm creates a direct embedding of a P\'olya urn into a decorated binary tree. The following result is the key to our tree and walk results and is utilized via embeddings and bijections in this section and the following. The result is implicit in a construction of \citet{Pitman2006}, Exercise 7.4.11.
\begin{proposition}\label{prop1} For any $n\geq k\geq1$,
\[ \operatorname{sp}^{k}_{\mathrm{Leaf}} \bigl(T^b_{2n-1}\bigr) \sim{\mathcal{P}}^{1}_{n-k}(0,2k-1) = {\mathcal{P}} ^{1}_{n-k-1}(1,2k). \]
\end{proposition}
\begin{pf} Since the labeling is random, we may consider the tree spanned by the root and the leaves labeled $1$ to $k$ of a uniformly chosen decorated binary tree, rather than the tree spanned by the root and $k$ uniformly chosen leaves of a random binary tree, cf. \citet{Pitman2006}, Exercise~7.4.11. Start with a uniformly chosen decorated binary tree $T^b_{2k-1}$ with $k$ leaves and note that the tree spanned by the root and leaves $1$ to $k$ is the whole tree. Now identify the $2k-1$ nodes of $T^b_{2k-1}$ with $2k-1$ white balls in an urn that has no black balls. If the randomly chosen node in a given step in R\'emy's algorithm is outside the current spanning tree, two nodes will be added outside the current spanning tree and we identify this as adding two black balls to the urn. If the randomly chosen node is in the current spanning tree, one node will be added to the current spanning tree and another outside of it, and we identify this as adding one black and one white ball to the urn.
Since we started with a tree of $2k-1$ nodes, we need $n-k$ steps to obtain a tree with $2n-1$ nodes. Hence, the size of the spanning tree is equal to the number of white balls in the urn, which follows the distribution ${\mathcal {P}}^{1}_{n-k}(0,2k-1)$. \end{pf}
As a consequence of Proposition~\ref{prop1}, whenever a quantity of interest can be coupled closely to $\operatorname{sp} ^{k}_{\mathrm{Leaf}}(T^b_{2n-1})$, rates of convergence can be obtained if the closeness of the coupling can be quantified appropriately. In this section, we give two tree examples of this approach. Since the distribution ${\mathcal{P}} ^{1}_{n-k}(0,2k-1)$ will appear over and over again, we set $N^*_{j,k} \sim{\mathcal{P}}^{1}_{j}(0,2k-1)$ in what follows. We use\vspace*{1pt} the notation $\Ge_0(p)$, $\Ge_1(p)$, $\operatorname{Be}(p)$, $\operatorname{Bi}(n,p)$ to denote, respectively, the geometric distributions with support starting at zero and at one, the Bernoulli distribution and the binomial distribution. For a nonnegative integer-valued random variable $N$, we also use the notation $X\sim\operatorname{Bi}(N,p)$ to denote that $X$ is distributed as a mixture of binomial distributions such that $\mathscr{L}(X| N=n)=\operatorname{Bi}(n,p)$.
We now make a simple, but important observation about the edges in the spanning tree.
\begin{lemma}\label{lem1} Let $1\leq k\leq n$ and $T^b_{2n-1}$ be a uniformly chosen binary tree with $n$ leaves and consider the tree spanned by the root and $k$ uniformly chosen distinct leaves. Let $M_{k,n}$ be the number of edges in this spanning tree that connect a node to its left-child (\hspace*{-1pt}``left-edges''). Conditional on the spanning tree having $N_{n-k,k}^*$ nodes,
\begin{equation} \label{9} M_{k,n} - (k-1) \sim\operatorname{Bi}\bigl(N_{n-k,k}^*-(2k-1),1/2 \bigr). \end{equation}
\end{lemma}
\begin{pf} We use R\'emy's algorithm and induction over $n$. Fix $k\geq 1$. For $n=k$ note that the spanning tree is the whole tree with $N^*_{0,k}=2k-1$ nodes and $2(k-1)$ edges. Since half of the edges must connect a node to the left-child, $M_{k,k} = k-1$ which is~\eqref{9} and this proves the base case. Assume now that~\eqref{9} is true for some $n\geq k$. Two things can happen when applying R\'emy's algorithm: either the current spanning tree is not changed, in which case $N^*_{n-k+1,k} = N^*_{n-k,k}$ and $M_{k,n+1} = M_{k,n}$, and hence~\eqref{9} holds by the induction hypothesis, or one node and one edge are inserted into the spanning tree, in which case $N^*_{n-k+1,k} = N^*_{n-k,k} + 1$ and $M_{k,n+1} = M_{k,n} + J$ with $J\sim\operatorname{Be}(1/2)$ independent of all else. In the latter case, using the induction hypothesis, $M_{k,n} + J -(k-1) \sim\operatorname{Bi}(N^*_{n-k,k}-(2k-1)+1,1/2) = \operatorname{Bi}(N^*_{n-k+1,k}-(2k-1),1/2)$, which is again~\eqref{9}. This concludes the induction step. \end{pf}
\begin{proposition}\label{prop2} Let $n\geq k\geq1$ and let $N^*_{j,k}\sim{\mathcal{P}}^{1}_{j}(0,2k-1)$. There exist nonnegative, integer-valued random variables $Y_1,\dots,Y_k$ such that, for each $i$,
\begin{equation} \mathbb{P}\bigl[Y_i > m| N^*_{n-k,k}\bigr] \leq2^{-m} \qquad\mbox{for all }m\geq0,\label{10} \end{equation}
and such that for
\begin{equation} \label{12} X_{n,k}:= N^*_{n-k,k} - \sum _{i=1}^k Y_i \end{equation}
we have
\begin{equation} \label{11} d_{\mathrm{TV}}\bigl(\mathscr{L}\bigl(\operatorname{sp} ^{k}_{\mathrm {Node}} \bigl(T^b_{2n-1}\bigr) \bigr),\mathscr{L}(X_{n,k}) \bigr) \leq\frac{k}{2n}+\frac{(k-1)^2}{2n-k+1}, \end{equation}
where $d_{\mathrm{TV}}$ denotes total variation distance.
For $k=1$ we have the explicit representation
\begin{equation} \label{13} \mathscr{L}\bigl(\operatorname{sp}^{1}_{\mathrm {Node}} \bigl(T^b_{2n-1}\bigr) \bigr) = \mathscr{L}(X_{n,1} | X_{n,1} > 0), \end{equation}
where $Y_1\sim\Ge_0(1/2)$ is independent of $N^*_{n-1,1}$. \end{proposition}
\begin{pf} We first prove~\eqref{13}. We start by regarding $T^b_{2n-1}$ as being ``planted'', that is, we think of the root node as being the left-child of a ``ground node'' (which itself has no right-child). We also think of the ground node as being internal. Furthermore, we think of the minimal spanning tree between the ground node and the root node as being empty, hence its size as being $0$. We first construct a pairing between leaves and internal nodes as follows (see Figure~\ref {fig2}). Pick a leaf and follow the path from that leaf toward the ground node and pair the leaf with the first parent of a left-child encountered in that path. Equivalently, pick an internal node and, in direction away from the ground node, first follow the left child of that internal node, and then keep following the right child until reaching a leaf. In particular, with this algorithm, if a selected leaf is a left-child it is assigned directly to its parent and the right-most leaf is assigned to the ground node. The fact that this description is indeed a pairing follows inductively by considering the left and right subtrees connected to the root, whereby the left subtree uses the root of the tree as its ground node.
\begin{figure}
\caption{Pairing up leaves and internal nodes in planted binary trees. Note that the right-most leaf in the tree is paired up with the ``ground node''.}
\label{fig2}
\end{figure}
Recall that we are considering the case $k=1$. Now, instead of choosing a node uniformly at random among the $2n$ nodes of the planted tree (the ground node included), we may equivalently choose Leaf~1 with probability $1/2$, or choose the internal node paired with Leaf~1 with probability $1/2$. Denote by $X_{n,1}$ the number of nodes in the path from the chosen node to the root, denote by $J_1$ the indicator of the event that we choose an internal node, and denote by $N^*_{n-1,1}$ the number of nodes in the path from Leaf~1 up to the root. From Proposition~\ref {prop1} with $k=1$, we have that $N^*_{n-1,1}\sim{\mathcal{P}}^1_{n-1}(0,1)$. If $J_1=0$, then $X_{n,1} = N^*_{n-1,1}$. If $J_1=1$, the number of nodes in the path to the root is that of Leaf~1 minus the number of nodes until the first parent of a left-child in the path is encountered. By Lemma~\ref{lem1}, given $N^*_{n-1,1}$, the left-edges along this path are determined by $N^*_{n-1,1}$ independent coin tosses with success probability $1/2$; hence, if $\tilde Y_1$ is the time until the first parent of a left-child is encountered, we have $\tilde Y_1\sim \Ge_1 (1/2)$, truncated at $N^*_{n-1,1}$. Thus, if $J_1=1$, we have $X_{n,1} = N^*_{n-1,1} - \tilde Y_1\wedge N^*_{n-1,1}$. Putting the two cases together, we obtain the representation $X_{n,1} = N^*_{n-1,1} - (J_1\tilde Y_1) \wedge N^*_{n-1,1}$, which has the same distribution as
\begin{equation} \label{14} N^*_{n-1,1} - Y_1\wedge N^*_{n-1,1}, \end{equation}
since $J_1\tilde Y_1\sim\Ge_0(1/2)$. As $X_{n,1}$ is zero if and only if the ground node was paired with Leaf~1 (i.e., Leaf~1 being the right-most leaf) and $J_1=1$, conditioning on $X_{n,1}$ being positive is equivalent to conditioning on choosing any node apart from the ground node, which proves~\eqref{13}.
Now, let $k$ be arbitrary. In a first step, instead of choosing $k$ distinct nodes at random, choose $k$ distinct leaves at random and, for each leaf, toss a fair coin $J_i$, $i=1,\dots,k$, to determine whether to choose the leaf or its internal partner, similar to the case $k=1$. Denote by $N^*_{n-k,k}$ the number of nodes in the minimal spanning tree spanned by Leaves~1 to $k$ and the root, and denote by $X_{n,k}$ the number of nodes in the minimal spanning tree spanned by the leaves or paired nodes and the root (if one of the chosen nodes is the ground node, then ignore that node in determining the minimal spanning tree). It is easy to see through two coupling arguments that choosing the nodes in this different way introduces a total variation error of at most
\[ \left[1-{2n-1\choose k}\Big/ {2n\choose k} \right] + \Biggl[1-\prod _{i=1}^{k-1} \biggl(1-\frac{i}{2n-i} \biggr) \Biggr]; \]
the first term stems from the possibility of choosing the ground node, and the second term from restricting the $k$ nodes to be from different pairings. From this, \eqref{11} easily follows.
It remains to show~\eqref{10} and~\eqref{12}. For~\eqref{12}, for each $i=1,\dots,k$, let $N^{(i)}_{n-1}$ be the number of nodes in the path from leaf $i$ up to the root, and let $Y'_i=J_i\tilde Y_i$ be the geometric random variable from the representation~\eqref{14}. With $Y_i = Y'_i \wedge N^{(i)}_{n-1}$, we hence have
\[ X_{n,k} = N^*_{n-k,k} - \sum_{i=1}^k Y'_i \wedge N^{(i)}_{n-1} = N^*_{n-k,k} - \sum_{i=1}^k Y_i. \]
It is not difficult to check that $Y_i$ and $N^*_{n-k,k} - N_{n-1}^{(i)}$ are conditionally independent given $N_{n-1}^{(i)}$. For~\eqref{10}, notice that $\mathbb{P}[Y_i>m| N_{n-1}^{(i)} ] = {\mathrm{I}} [m < N_{n-1}^{(i)} ]2^{-m}$. Hence,
\begin{eqnarray*} \mathbb{P}\bigl[Y_i>m| N_{n-k,k}^* \bigr] & =& \mathbb{E}\bigl\{\mathbb{P} \bigl[Y_i>m| N^*_{n-k,k},N_{n-1}^{(i)}-N^*_{n-k,k} \bigr]| N_{n-k,k}^* \bigr\} \\ & =& \mathbb{E}\bigl\{\mathbb{P}\bigl[Y_i>m| N^{(i)}_{n-1} \bigr]| N_{n-k,k}^* \bigr\} \leq2^{-m}. \end{eqnarray*}\upqed
\end{pf}
\subsubsection*{Uniform plane tree}
It is well known that there are $n!C_{n-1}$ decorated binary trees of size $2n-1$ as well as labeled plane trees with $n$ nodes, where $C_1,C_2,\dots$ are the Catalan numbers. There are various ways to describe bijections between the two sets. We first give a direct algorithm to construct a plane tree from a binary tree; see Figure~\ref{fig3}.
\begin{figure}
\caption{Bijection between a decorated binary tree of size $2n-1$ (on the left), and a rooted labeled plane tree of size $n$ (on the right).}
\label{fig3}
\end{figure}
Given a binary tree, we do a depth-first exploration, starting from the root and exploring left-child before right-child. We construct the plane tree as we explore the binary tree, starting with an unlabeled root node. Whenever a left-edge in the binary tree is visited for the first time, we add one new unlabeled child to the current node in the plane tree to the right of all existing children of that node, and move to that new child. If a right-edge is visited for the first time, we move back to the parent of the current node in the plane tree. Whenever we encounter a leaf in the binary tree, we copy that label to the node in the plane tree.
Another way to describe the bijection, initially described between unlabeled objects, is by means of \emph{Dyck paths} of length $2(n-1)$. These are syntactically valid strings of $n-1$ nested bracket pairs. To go from a Dyck path to a binary tree, we parse the string from left-to-right and at the same time do a depth-first construction of the binary tree. Start with one active node. Any opening bracket corresponds to adding a left-child to the currently active node and then making that child the active node, whereas a closing bracket corresponds to adding a right-child as sibling of the left-child that belongs to the opening bracket of the current closing bracket, and then making that right child the active node. The labeling is added by inserting $n-1$ of the $n$ leaf labels in front of the $n-1$ closing brackets, as well as one label at the end of the string in any of the $n!$ possible orderings. When converting the labeled Dyck path into a binary tree, every time a label is encountered that label is copied to the currently active node in the tree. The Dyck path corresponding to the tree in Figure~\ref{fig3} would be ``$((()))(()())$'', respectively, with the labeling, ``$(((7)6)1)((5)(2)3)4$''.
To obtain a labeled plane tree from a labeled Dyck path, again do a depth-first construction, starting with one active node. An opening bracket corresponds to adding a new child to the currently active node to the right of all already present siblings and then making that child the new active node, whereas a closing bracket represents making the parent of the currently active node the new active node. If a label is encountered in the string, the label is copied to the currently active node.
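The bracket-string description translates directly into code. The following Python sketch (our own illustration) converts an unlabeled Dyck string into the corresponding plane tree, represented as nested lists of children, following exactly the depth-first rules just described.

\begin{verbatim}
def dyck_to_plane_tree(dyck):
    # Each node is the list of its children, ordered from left to right.
    root = []
    stack = [root]                 # active nodes along the current path
    for ch in dyck:
        if ch == "(":
            child = []             # new right-most child; descend into it
            stack[-1].append(child)
            stack.append(child)
        elif ch == ")":
            stack.pop()            # move back to the parent
    assert len(stack) == 1, "unbalanced Dyck string"
    return root

tree = dyck_to_plane_tree("((()))(()())")   # the Dyck string used above
print(len(tree))                             # the root has two children here
\end{verbatim}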
\begin{proposition}\label{prop3} Let $n\geq k\geq1$ and $N^*_{j,k}\sim{\mathcal{P}}^{1}_{j}(0,2k-1)$. Assume that $X_{n,k}\sim\operatorname{Bi}(N^*_{n-k,k}-(2k-1),1/2 )$. Then
\[ \operatorname{sp}^{k}_{\mathrm{Node}} \bigl(T^p_n\bigr)~\stackrel{\mathscr{D}} {=} X_{n,k} + k. \]
\end{proposition}
\begin{pf} We use the bijection between binary and plane trees. The number of edges in the spanning tree of $k$ nodes in the plane tree is equal to the number of left-edges in the spanning tree of the corresponding $k$ leaves in the binary tree (note that in the spanning tree of the binary tree, we count left-edges both between internal nodes as well as between internal nodes and leaves). This is because only left-edges in the binary tree contribute to the number of edges in the plane tree. The proof is now a simple consequence of Proposition~\ref{prop1} and Lemma~\ref{lem1} and the fact that the number of nodes in any spanning tree is equal to one plus the number of edges in that spanning tree. \end{pf}
It is illuminating to see how R\'emy's algorithm acts on plane trees by means of the bijection described above (see Figure~\ref{fig4}). Apart from adding new edges to existing nodes, we also observe an operation that ``cuts'' existing nodes. The trees $T^p_n$ and $T^b_{2n-1}$ are special cases of Galton--Watson trees (respective offspring distributions geometric and uniform on $\{0,2\}$) conditioned to have $n$ and $2n-1$ nodes, respectively. As noted by \citet{Janson2006b}, such conditioned trees cannot in general be grown by only adding edges. Hence, it is tempting to speculate whether there is a wider class of offspring distributions for which conditional Galton--Watson trees can be grown using only local operations on trees such as those in Figure~\ref{fig4}.
\begin{figure}
\caption{R\'emy's algorithm acting on plane trees by means of the bijection given in Figure~\protect\ref{fig3}. We leave it to the reader to find the operations in the binary tree as given in \textup{(a)} that correspond to the operations~\textup{(c)--(j)}.}
\label{fig4}
\end{figure}
Before proceeding with the proof of Theorem~\ref{thm2}, we need an auxiliary lemma used to transfer rates from our urn model to the distributions in Propositions~\ref{prop2} and~\ref{prop3}. Here and below $\Vert\cdot\Vert $ denotes the essential supremum norm.
\begin{lemma}\label{lem2} Let $\alpha\geq1$ and $\beta\geq1$. There is a constant $C=C_{\alpha,\beta}$, such that for any positive random variable $X$ and any real-valued random variable $\xi$,
\begin{eqnarray}\label{15} && d_{\mathrm{K}}\bigl(\mathscr{L}(X+\xi),\operatorname{GG}(\alpha,\beta) \bigr) \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad \leq C \bigl (d_{\mathrm{K}} \bigl(\mathscr{L}(X),\operatorname{GG}(\alpha,\beta) \bigr) + \bigl\Vert\mathbb{E}\bigl(\xi^2 | X\bigr)\bigr\Vert^{1/2} \bigr). \end{eqnarray}
If $X$ and $\xi$ satisfy
\begin{equation} \label{16} \mathbb{P}\bigl[\vert\xi\vert\geq t | X \bigr] \leq c_1 e^{-c_2 t^2 / X} \end{equation}
for some constants $c_1>0$ and $c_2 > 1$, then
\begin{eqnarray}\label{17} && d_{\mathrm{K}}\bigl(\mathscr{L}(X+\xi),\operatorname{GG}(\alpha,\beta) \bigr) \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad \leq C \biggl (d_{\mathrm{K}} \bigl(\mathscr{L}(X),\operatorname{GG}(\alpha,\beta) \bigr) + \frac{1+c_1+\log c_2}{\sqrt{c_2}} \biggr). \end{eqnarray}
\end{lemma}
\begin{pf} The proofs of~\eqref{15} and~\eqref{17} follow along the lines of the proof of Lemma~1 of \citet{Bolthausen1982a}. Once one observes that $\operatorname{GG}(\alpha,\beta)$ has bounded density, the modifications needed to prove~\eqref{15} are straightforward, and hence omitted. The modifications to prove~\eqref{17}, however, are more substantial, hence we give a complete proof for this case. Let $Z\sim \operatorname{GG}(\alpha,\beta)$, and let
\begin{eqnarray*} F(t) &=& \mathbb{P}[X\leq t], \qquad F^*(t) = \mathbb{P}[X+\xi\leq t], \\ G(t) &=& \mathbb{P} [Z\leq t], \qquad\delta= \sup_{t>0} \bigl\vert F(t)-G(t)\bigr \vert. \end{eqnarray*}
If $t>\varepsilon>0$, then
\begin{eqnarray*} F^*(t) &=& \mathbb{E}\bigl\{\mathbb{P}[\xi\leq t - X| X] \bigr\} \geq\mathbb{E}\bigl\{{\mathrm{I}} [X\leq t-\varepsilon]\mathbb{P}[\xi\leq t - X| X] \bigr\} \\ &=& F(t-\varepsilon) - \mathbb{E}\bigl\{{\mathrm{I}}[X\leq t-\varepsilon]\mathbb{P}[\xi>t-X| X] \bigr\}. \end{eqnarray*}
Let $t_0 = \log c_2$ and $\varepsilon= \frac{\log c_2}{\sqrt{c_2}}$, and observe that, since $c_2>1$, we have $t_0>\varepsilon>0$. Also note that one can find a constant $c_3$ such that $1-G(t) \leq c_3 e^{-t/2}$. Using~\eqref{16} and writing $M_{\alpha,\beta}$ for the maximum of the density of $\operatorname{GG}(\alpha,\beta)$ (defined explicitly in Lemma~\ref{lem12} below),
\begin{eqnarray*} && \mathbb{E}\bigl\{{\mathrm{I}}[X\leq t-\varepsilon]\mathbb{P}[\xi>t-X| X] \bigr\} \\
&& \qquad\leq\mathbb{E}\bigl\{{\mathrm{I}}[X\leq t\wedge t_0-\varepsilon]\mathbb{P}[\xi>t\wedge t_0-X|X] \bigr\} + \mathbb{P}[X>t_0-\varepsilon] \\ &&\qquad\leq c_1\mathbb{E}\bigl\{{\mathrm{I}}[X\leq t\wedge t_0- \varepsilon]e^{-c_2(t\wedge t_0-X)^2/X} \bigr\} + \mathbb{P}[Z>t_0-\varepsilon]+\delta \\ &&\qquad\leq c_1 e^{-c_2\varepsilon^2/t_0} + \mathbb{P}[Z>t_0]+\delta+ \varepsilon M_{\alpha,\beta} \\ &&\qquad\leq c_1 e^{-\log c_2} + \delta+ \frac{c_3+M_{\alpha,\beta }\log c_2}{\sqrt{c_2}} \leq \delta+ \frac{c_1+c_3+M_{\alpha,\beta}\log c_2}{\sqrt{c_2}}. \end{eqnarray*}
Therefore,
\begin{eqnarray*} F^*(t) - G(t) & \geq & F(t-\varepsilon) - G(t-\varepsilon) - \varepsilon M_{\alpha,\beta} - \delta- \frac{c_1+c_3+M_{\alpha,\beta}\log c_2}{\sqrt{c_2}} \\ & \geq& - 2\delta- \frac{c_1+c_3+2M_{\alpha,\beta}\log c_2}{\sqrt{c_2}}. \end{eqnarray*}
On the other hand,
\[ F^*(t) \leq F(t+\varepsilon) + \mathbb{E}\bigl\{{\mathrm{I}}[ t+\varepsilon< X \leq t_0]\mathbb{P}[\xi
\leq t-X|X] \bigr\} + \mathbb{P}[X>t_0]. \]
Since
\begin{eqnarray*}
&& \mathbb{E}\bigl\{{\mathrm{I}}[t+\varepsilon< X \leq t_0]\mathbb{P}[\xi\leq t-X|X] \bigr\} \\ &&\qquad\leq c_1\mathbb{E}\bigl\{{\mathrm{I}}[t+\varepsilon< X \leq t_0]e^{-c_2(t-X)^2/X} \bigr\} \leq c_1 e^{-c_2\varepsilon^2/t_0} \leq\frac{c_1}{ c_2 } \end{eqnarray*}
and $\mathbb{P}[X > t_0] \leq\delta+ c_3 / \sqrt{c_2}$, by a similar reasoning as above,
\begin{eqnarray*} F^*(t) - G(t) &\leq& F(t+\varepsilon) - G(t+\varepsilon) + \varepsilon M_{\alpha,\beta} + \delta+ \frac{c_1+c_3}{\sqrt{ c_2}} \\ &\leq& 2\delta+ \frac{c_1 + c_3 + M_{\alpha,\beta}\log c_2}{\sqrt{c_2}}. \end{eqnarray*}
Hence,
\begin{equation} \label{18} \bigl\vert F^*(t)-G(t)\bigr\vert\leq2\delta+ \frac{c_1 + c_3 + 2M_{\alpha,\beta }\log c_2}{\sqrt{c_2}}. \end{equation}
From this, one easily obtains~\eqref{17}. \end{pf}
\begin{pf*}{Proof of Theorem~\ref{thm2}} \textit{Case}~(i). This follows directly from Proposition~\ref{prop1} and~\eqref{1} of Theorem~\ref{thm1}.
\textit{Case} (ii). Let $W_n=\operatorname{sp}^{k}_{\mathrm {Node}}(T^b_{2n-1})/\nu_n$ with $\nu_n = \mu _{n-k-1}(1,2k)$, and let $X_{n,k}$ be as in Proposition~\ref{prop2}. Applying the triangle inequality, we obtain
\begin{eqnarray}\label{19} &&d_{\mathrm{K}}\bigl(\mathscr{L}(W_n),\operatorname{GG}(2k,2) \bigr) \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad\leq d_{\mathrm{K}}\bigl(\mathscr{L}(W_n),\mathscr{L}(X_{n,k}/ \nu_n ) \bigr) + d_{\mathrm{K}}\bigl(\mathscr{L}(X_{n,k}/\nu_n ),\operatorname{GG}(2k,2) \bigr). \end{eqnarray}
Since the total variation distance is an upper bound on the Kolmogorov distance, \eqref{11} yields that the first term in \eqref{19} is of order $\mathrm{O}(n^{-1})$. To bound the second term in~\eqref{19}, let $N^*_{n-k,k}$ and $Y_1,\dots,Y_k$ be as in Proposition~\ref {prop2}; set $X:= N^*_{n-k,k} / \nu_n$ and $\xi:= (Y_1+\cdots+Y_k) / \nu_n$. From~\eqref{10} and recalling that $ (\sum_{i=1}^k Y_i )^2\leq k\sum_{i=1}^k Y_i^2$, it is easy to see that
$\mathbb{E}(\xi^2|X) \leq 6k^2/\nu_n^2$ almost surely. Applying~\eqref{15} from Lemma~\ref{lem2}, we hence obtain that
\[ d_{\mathrm{K}}\bigl(\mathscr{L}(X_{n,k}/\nu_n),\operatorname{GG}(2k,2)\bigr)\leq C \bigl( d_{\mathrm{K}}\bigl(\mathscr{L}\bigl(N^*_{n-k,k}/\nu_n\bigr), \operatorname{GG}(2k,2)\bigr) + \nu_n^{-1} \bigr). \]
Combining this with Theorem~\ref{thm1} and \eqref{19}, the claim follows.
\textit{Case} (iii). Let $N^*_{n-k,k}$ and $X_{n,k}$ be as in Proposition~\ref{prop3} and let again $\nu_n=\mu_{n-k-1}(1,2k)$. We may\vspace*{1pt} consider $2X_{n,k}/\nu_n$ in place of $2\operatorname{sp}^{k}_{\mathrm{Node}}(T^p_n)/\nu _n$, since by \eqref{2} of Theorem~\ref{thm1}, the constant shift $2k/\nu_n$ is of order $n^{-1/2}$, which, by Lemma~\ref{lem2}, translates into an error of order at most $n^{-1/2}$. Let $X:= N^*_{n-k,k} / \nu_n $ and $\xi:= (2X_{n,k} - N^*_{n-k,k})/\nu_n$ and note that $2X_{n,k} / \nu_n = X+\xi$. From Chernoff's inequality, it follows that~\eqref{16} holds with $c_1 = 2$ and $c_2 = \nu_n^2/4$. For $n$ large enough, $c_2>1$ [again using~\eqref{2}] and applying~\eqref {17} from Lemma~\ref{lem2} and~\eqref{1} from Theorem~\ref{thm1}, the claim follows. \end{pf*}
\section{Random walk: Proof of Theorem~\texorpdfstring{\protect\ref{thm3}}{1.11}} \label{sec2a}
That random walks and random trees are intimately connected has been observed in many places; see, for example, \citet{Aldous1991} and \citet{Pitman2006}. The specific bijections between binary trees and random walk, excursion, bridge and meander which we will make use of were sketched by \citet{Marchal2003}; see also the references therein. It is clear that for each such bijection R\'emy's algorithm can be translated to recursively create random walk, excursion, bridge and meander of arbitrary lengths.
\begin{figure}
\caption{Illustration of the bijection between a binary tree with $n$ leaves (on the left), and random walk excursions of length $2n$ (on the right).}
\label{fig5}
\end{figure}
\subsection*{Random walk excursion} The simplest bijection is that between a binary tree of size $2n-1$ and a (positive) random walk excursion of length $2n$, as illustrated in Figure~\ref{fig5}. Note first that the first and last step of the excursion must be $+1$ and $-1$, respectively, that is, $S^e_{2n}(1) = S^e_{2n}(2n-1) = 1$. To map the tree to the path from $1$ to $2n-1$, we do a left-to-right depth-first exploration of the tree (i.e., counterclockwise): starting from the root, each time an edge is visited the first time (out of a total of two times that each edge is visited), the excursion will go up by one if the edge is a left-edge and go down by one if the edge is a right-edge. By means of the Dyck path representation of the binary tree, we conclude that in this exploration process, the number of explored left edges (``opening brackets'') is always larger than the number of explored right edges (``closing brackets''), hence the random walk stays positive. Furthermore, since the number of left- and right-edges is equal, the final height is the same as the starting height. It is not hard to see that the height of a time point in the excursion corresponds to one plus the number of left-edges from the corresponding point in the binary tree up to the root.
Furthermore, the pairing between leaves and internal nodes in the (planted) binary tree induces a pairing between the time points in the random walk excursion (the pairing in Figure~\ref{fig2}, by means of the bijection in Figure~\ref{fig5}, results in the pairing in Figure~\ref{fig6}). Note that all time points can be paired except for the final time point $2n$ for which, however, we know the height.
\begin{figure}
\caption{Pairing up the points in the random walk excursion. Note that we pair up time point $2n-1$ with time point $0$, whereas time point $2n$ is left without a partner.}
\label{fig6}
\end{figure}
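Since the opening and closing brackets of the Dyck-path representation record exactly the first visits to left- and right-edges in this exploration, the excursion can be written down directly from the bracket string. The following Python sketch (our own illustration) does this and checks positivity.

\begin{verbatim}
def excursion_from_dyck(dyck):
    # Heights S(0), ..., S(2n) of the excursion encoded by a binary tree's
    # Dyck string: the first and last steps are the forced +1 and -1, and in
    # between a left-edge "(" is a +1 step and a right-edge ")" a -1 step.
    steps = [1] + [1 if ch == "(" else -1 for ch in dyck] + [-1]
    heights = [0]
    for s in steps:
        heights.append(heights[-1] + s)
    return heights

path = excursion_from_dyck("((()))(()())")
print(path)
assert path[0] == path[-1] == 0 and all(h > 0 for h in path[1:-1])
\end{verbatim}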
\begin{proposition}[(Height of an excursion at a random time)]\label{prop4} If $n\geq1$, $N^*_{n-1} \sim{\mathcal{P}}^{1}_{n-1}(0,1)$ and $K'\sim\mathrm{U}\{ 0,1,\dots,2n-1\}$ independent of $N^*_{n-1}$ and the excursion $(S^e_{2n}(u))_{u=0}^{2n}$, then
\[ S^e_{2n}\bigl(K'\bigr)\sim\operatorname{Bi} \bigl(N^*_{n-1},1/2\bigr). \]
\end{proposition}
\begin{pf} Mapping the pairing of leaves and internal nodes from the planted binary tree to the excursion, we have that the heights in each pair differ by exactly one because, by definition of the pairing, each leaf has one more left edge in its path up to the root as compared to the internal node it is paired with.
Let $J\sim\operatorname{Be}(1/2)$ independent of all else. Instead of choosing a random time point $K'$, we may as well choose with probability $1/2$ the time point corresponding to Leaf~1 ($J=0$), and choose with probability $1/2$ the time point paired with the time point given by Leaf~1 ($J=1$). Recall that the height of a time point corresponding to a leaf is just one plus the number of left-edges $M_{1,n}$ in the path to the root in the corresponding binary tree. From Lemma~\ref{lem1} with $k=1$, we have $M_{1,n}\sim\operatorname{Bi}(N^*_{n-1}-1,1/2)$. Let $X_n$ be the height of the excursion at the time point corresponding to the node chosen in the binary tree; we have $X_n = 1 + M_{1,n} - J$. Since $J$ is independent of the tree and since $1-J\sim\operatorname{Be}(1/2)$, we have $X_n \sim\operatorname{Bi}(N^*_{n-1},1/2)$, which proves the claim. \end{pf}
\subsection*{Random walk bridge} We now discuss the bijection between decorated binary trees and random walk bridges; see Figure~\ref{fig7} for an example. We first mark the path from Leaf~1 to the root. We call all the internal nodes along this path, including the root, the \emph{spine} (the trivial tree of size one has no internal node and, therefore, an empty spine). As before, a left-edge represents ``$+1$'' and a right-edge represents ``$-1$''. The exploration starts at the root. Whenever a spine node is visited, explore first the child (and its subtree) that is not part of the spine, and then the child that is next in the spine. Also, if the right child of a spine node is being explored and if that child is not itself a spine node, do the exploration \emph{clockwise}, until the exploration process is back to the spine. This makes each sub-tree to the left of the spine a positive excursion and each sub-tree to the right a negative excursion; cf. \citet{Pitman2006}, Exercise 7.4.14.
\begin{figure}
\caption{Illustration of the bijection between a decorated binary tree of size $2n+1$ with a spine and random walk bridge of length $2n$. Note that within sub-trees that grow to the left of the spine, the depth-first exploration is done counterclockwise, whereas within sub-trees that grow to the right it is done clockwise.}
\label{fig7}
\end{figure}
\begin{proposition}[(Occupation time of bridge)]\label{prop5} If $n\geq0$, then
\[ L^b_{2n} \sim{\mathcal{P}}^{1}_n(0,1). \]
\end{proposition}
\begin{pf} The proof is straightforward by observing that the number of visits to the origin $L^b_{2n}$ is exactly the number of nodes in the path from Leaf~1 to the root and then applying Proposition~\ref{prop1} with $k=1$ and $n$ replaced by $n+1$. \end{pf}
\subsection*{Random walk meander}
We use a well-known bijection between random walk bridges of length $2n$ and meanders of length $2n+1$; see Figure~\ref{fig8}. Start the meander with one positive step. Then, follow the absolute value of the bridge, except that the last step of every negative excursion is flipped. Alternatively, consider the random walk bridge difference sequence. Leave all the steps belonging to positive excursions untouched, and multiply all steps belonging to negative excursions by $-1$, except for the last step of each respective negative excursion (which must necessarily be a ``$+$1''). Now, start the meander with one positive step and then follow the new difference sequence.
\begin{figure}
\caption{Illustration of the bijection between a random walk bridge of length $2n$ (above) and a meander of length $2n+1$ (below).}
\label{fig8}
\end{figure}
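As an illustration of this transformation, the following Python sketch (our own; the function name is hypothetical) maps the $\pm1$ increments of a bridge of length $2n$ to a meander of length $2n+1$ by reflecting every negative excursion except for its final step, exactly as described above.

\begin{verbatim}
def bridge_to_meander(bridge_steps):
    # Map the +/-1 increments of a walk bridge of length 2n to the increments
    # of a meander of length 2n+1.
    meander_steps = [1]                  # the meander starts with a positive step
    height = 0
    for s in bridge_steps:
        prev, height = height, height + s
        if prev >= 0 and height >= 0:
            meander_steps.append(s)      # steps of positive excursions are kept
        elif height == 0:
            meander_steps.append(1)      # last step of a negative excursion stays +1
        else:
            meander_steps.append(-s)     # remaining negative-excursion steps flip
    return meander_steps

# A bridge with one positive and one negative excursion:
print(bridge_to_meander([1, -1, -1, 1]))   # [1, 1, -1, 1, 1]
\end{verbatim}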
\begin{proposition}[(Final height of meander)]\label{prop6} If $n\geq0$, $N^*_{n} \sim{\mathcal{P}}^{1}_n(0,1)$, $X_n\sim\operatorname{Bi} (N^*_{n}-1,1/2)$ and $Y_n\sim\operatorname{Bi}(N^*_{n},1/2)$, then
\[ S^m_{2n+1}(2n+1) \sim\mathscr{L}(2 X_n + 1),\qquad S^m_{2n+2}(2n+2) \sim\mathscr{L}(2 Y_n| Y_n>0). \]
\end{proposition}
\begin{pf} It is clear that every negative excursion in the random walk will increase the final height of the meander by two. Since the number of negative excursions equals the number of left-edges in the spine of the corresponding binary tree, the first identity follows directly from Lemma~\ref{lem1} for $k=1$. To obtain a meander of length $2n+2$, proceed as follows. First, consider a meander of length $2n+1$, let $2X_n+1$ be its final height, and then add one additional time step to the meander by means of an independent fair coin toss. The resulting process is a simple random walk, conditioned to be positive from time steps $1$ to $2n+1$. The height of this process at time $2n+2$ has distribution $2Y_n$, where we can take $Y_n = X_n+J$ and where $J\sim \operatorname{Be}(1/2)$ independent of $X_n$. However, the final height of this process may now be zero. Hence, conditioning on the path being positive results in a meander of length $2n+2$. This proves the second identity. \end{pf}
\begin{figure}
\caption{Bijection between a pair of decorated binary trees with a total size of $2n+2$ and an unconditional random walk of length $2n+1$. The meander part of the walk is constructed through a random walk bridge, which is plotted in dashed lines.}
\label{fig9}
\end{figure}
\subsection*{Unconditional random walk} In order to represent an unconditional random walk of length $2n+1$, we use two decorated binary trees, the first tree representing the bridge part of the random walk (i.e., the random walk until the last return to the origin) and the second tree representing the meander part (i.e., the random walk after the last return to the origin); see Figure~\ref{fig9}. Note that every random walk of odd length has a meander part. First, with equal probability, start either with the two trivial trees \extnode{1} and \extnode{{$\scriptstyle+$}} or with the two trivial trees \extnode{1} and \extnode{{$\scriptstyle-$}} [representing the random walk $S_1$ with $S_1(1)=1$, resp., $S_1(1) = -1$]. Then, perform R\'emy's algorithm in exactly the same way as for a single tree. That is, at each time step, a random node is chosen uniformly among all nodes of the two trees and then an internal node as well as a new leaf are inserted. From these two trees, the random walk is constructed in a straightforward manner: the first tree represents the bridge part, whereas the second tree represents the meander part (if the initial second tree was \extnode{{$\scriptstyle-$}}, then the whole meander is first constructed as illustrated in Figure~\ref{fig8} and is then flipped to become negative).
\begin{proposition}[(Occupation time of random walk)]\label{prop7} If $n\geq0$, then
\[ L_{2n} \sim{\mathcal{P}}^{1}_n(1,1), \qquad L_{2n+1} \sim{\mathcal{P}}^{1}_n(1,1). \]
\end{proposition}
\begin{pf} Note that the number of visits to the origin is exactly the number of nodes in the path from Leaf $1$ (which is always in the first tree) to the root. Hence, we can use a similar urn embedding as for Proposition~\ref{prop1} with $k=1$, except that at the beginning the urn contains one black ball and one white ball (the black ball representing the leaf of the second tree).
This proves the second identity of the proposition. To obtain the first identity, take a random walk of length $2n+1$ and remove the last time step, obtaining a random walk of length $2n$. Since the number of visits to the origin cannot be changed in this way, the first identity follows. \end{pf}
\begin{remark} Proposition~\ref{prop5} is implicitly used in \citet{Pitman2006}, Exercise~7.4.14. The other propositions do not appear to have been stated explicitly in the literature. \end{remark}
\begin{pf*}{Proof of Theorem~\ref{thm3}} Cases (i) and (ii) are immediate from Theorem~\ref{thm1} in combination with Proposition~\ref{prop7} and Proposition~\ref{prop5}, respectively. Using Proposition~\ref{prop4}, case (iii) is proved in essentially the same way as case (iii) of Theorem~\ref{thm2}, also noting that the total variation error introduced by using $K$ instead of $K'$ is of order $\mathrm{O}(n^{-1})$. Using Proposition~\ref{prop6}, case (iv) for odd $n$ is also proved in essentially the same way as case (iii) of Theorem~\ref{thm2}.
In order to prove case (iv) for even $n$, note that the total variation distance between $\mathscr{L}(Y_n)$ and $\mathscr{L}(Y_n|Y_n>0)$ is $\mathbb{P} [Y_n=0]=\IE2^{-N_n^*}$. Let $Z\sim\operatorname{GG}(2,2)$; using Theorem~\ref{thm1},
\begin{eqnarray*} \IE2^{-N_n^*} & \leq&\mathbb{P}\bigl[N_n < \tfrac{1}{2} \log_2 n\bigr] + 2^{-{1/2}\log_2 n} \\ & \leq&\mathbb{P}\bigl[Z < \tfrac{1}{2}\mu^{-1}_n \log_2 n\bigr] + d_{\mathrm{K}}\bigl(\mathscr{L}(N_n/\mu _n),\mathscr{L}(Z) \bigr) + n^{-1/2} =\mathrm{O}\bigl(n^{-1/2} \bigr). \end{eqnarray*}
Now, estimating $d_{\mathrm{K}}(\mathscr{L}(2 Y_n), \operatorname{GG}(2,2) )$ again follows the proof of case (iii) of Theorem~\ref{thm2}. \end{pf*}
\section{Proof of urn Theorem~\texorpdfstring{\protect\ref{thm1}}{1.2}}\label{sec2}
In order to prove Theorem~\ref{thm1}, we need a few lemmas.
\begin{lemma}\label{lem3} Let $b\geq0$, $w>0$, $N_n=N_{n}(b,w)\sim{\mathcal {P}}^l_n(b,w)$ and let $n_{i}=n_i(b,w)=w+b+i+\lfloor i/l \rfloor$ be the total number of balls in the $ {\mathcal{P}}^l_n(b,w)$ urn after the $i$th draw. If $m\geq1$ is an integer and $D_{n,m}(b,w):=\prod_{i=0}^{m-1} (i+N_{n}(b,w))$, then
\begin{equation} \label{20} \mathbb{E} D_{n,m}(b,w) = \prod_{j=0}^{m-1} (w+j) \prod_{i=0}^{n-1}\bigl(1+m/n_{i}(b,w) \bigr) \end{equation}
and for some positive values $c:=c(b,w,l,m)$ and $C:=C(b,w,l,m)$ not depending on $n$ we have
\begin{equation} \label{21} cn^{ml/(l+1)}<\mathbb{E}\bigl[N_{n}(b,w)^m \bigr]<Cn^{ml/(l+1)}. \end{equation}
\end{lemma}
\begin{pf} Fix $b,w$ and write $D_{n,m}=D_{n,m}(b,w)$. We first prove~\eqref{20}. Conditioning on the contents of the urn after draw and replacement $n-1$, and noting that at each step, the number of white balls in the urn either stays the same or increases by exactly one, we have
\begin{eqnarray*}
\mathbb{E}\{D_{n,m}|N_{n-1}\} &=& \frac{N_{n-1}}{n_{n-1}} \frac{ D_{n-1,m}(N_{n-1}+m)}{N_{n-1}}+ \frac{n_{n-1}-N _{n-1}}{n_{n-1}}D_{n-1,m} \\ &=&(1+m/n_{n-1}) D_{n-1,m}, \end{eqnarray*}
which when iterated yields~\eqref{20}.
By the definition of $n_i$,
\[ i+w+b-1+i/l\leq n_{i}\leq i+w+b+i/l, \]
and now setting $x=l/(l+1)$ and $y=(w+b-1)l/(l+1)$, we find for some constants $c, C$ not depending on $n$ that
\begin{equation} \qquad c n^{mx}\leq c \frac{\Gamma(mx+y+x+n)}{\Gamma(y+x+n)} \leq\mathbb{E} D_{n,m} \leq C \frac{ \Gamma(mx+y+n)}{\Gamma(y+n)} \leq C n^{mx}.\label{22} \end{equation}
The upper bound follows from this and the easy fact that $\IEN _n^m\leq\mathbb{E} D_{n,m}$. The lower bound follows from~\eqref{22} and the inequality $\IEN_n^m = \mathbb{E} D_{n,1}^m\geq(\mathbb{E} D_{n,1} )^m$, which is a consequence of Jensen's inequality. \end{pf}
Our next result implies that biasing the distribution ${\mathcal{P}}^l_n(b,w)$ by the rising factorial of order $r$ is the same as adding $r$ white balls to the urn before starting the process, and then removing $r$ white balls at the end. We will only use the lemma for $r=l+1$, but state and prove it for general $r$ because it is an interesting result in its own right.
\begin{lemma}\label{lem4} Let $N_{n}(b,w)$ and $D_{n,m}(b,w)$ be as in Lemma~\ref{lem3} and let $r\geq2$. If $N_n^{[r]}=N_n^{[r]}(b,w)$ is a random variable such that
\begin{equation} \label{23} \mathbb{P}\bigl[N_n^{[r]}=k\bigr]= \frac{ [\prod_{i=0}^{r-1}(k+i) ]\mathbb{P} [N_n(b,w)=k]}{\mathbb{E} D_{n,r}(b,w)}, \end{equation}
then
\begin{equation} N_n(b,w+r)\edN_n^{[r]}(b,w)+r. \label{24} \end{equation}
\end{lemma}
\begin{pf} Since $N_n(b,w+r)$ and $N_n^{[r]}(b,w)+r$ are bounded variables, the lemma follows by verifying their factorial moments are equal. With $n_i(b,w)$ as in Lemma~\ref{lem3}, for any $m\geq1$ we have
\begin{eqnarray*} \mathbb{E}\prod_{i=0}^{m-1} \bigl( N_n^{[r]}(b,w)+r+i\bigr)&=&\frac{\mathbb{E} D_{n,m+r}(b,w)}{\mathbb{E} D_{n,r}(b,w)} \\ &=& \prod _{j=0}^{m-1} (w+r+j) \prod _{i=1}^{n}\frac {n_{i-1}(b,w)+m+r}{n_{i-1}(b,w)+r} \\ &=&\mathbb{E} D_{n,m}(b,w+r) = \mathbb{E}\prod_{i=0}^{m-1} \bigl(i+N_n(b,w+r)\bigr); \end{eqnarray*}
the second and third equalities follow by~\eqref{20} and the definition of $n_i(b,w)$, and the last follows from the definition of $D_{n,m}(b,w)$. \end{pf}
\begin{lemma}\label{lem5} For $N_n(1,w)\sim{\mathcal{P}}^l_n(1,w)$ and $l\geq1$, there is a coupling of $N_n^{(l+1)}(1,w)$, a random variable having the $(l+1)$-power bias distribution of $N_n(1,w)$, with a variable $N_{n-l}(1,w+l+1) \sim{\mathcal{P}}^l_{n-l}(1,w+l+1)$ such that for some constant $C:=C(w,l)$,
\[ \mathbb{P}\bigl[\bigl\vertN_{n-l}(1,w+l+1)- N_n^{(l+1)}(1,w) \bigr\vert>2l+1 \bigr]\leq Cn^{-l/(l+1)}. \]
\end{lemma}
\begin{pf} Obviously, we can couple $N_n(1,w+l+1)\sim{\mathcal{P}}^l_n(1,w+l+1)$ with $N_{n-l}(1,w+l+1)$ so that
\[ \bigl\vertN_{n-l}(1,w+l+1)-N_n(1,w+l+1)\bigr \vert\leq l, \]
and then Lemma~\ref{lem4} implies that we may couple $N_{n}(1,w+l+1)$ with $N _n^{[l+1]}(1,w)$ [with distribution defined at~\eqref{23}] so that almost surely
\begin{eqnarray*} &&\bigl\vertN_{n-l}(1,w+l+1)-N_n^{[l+1]}(1,w) \bigr\vert \\ && \qquad\leq\bigl\vertN_{n-l}(1,w+l+1)-\bigl(N _n^{[l+1]}(1,w)+l+1\bigr)\bigr\vert+l+1 \\ &&\qquad=\bigl\vertN_{n-l}(1,w+l+1)-N_n(1,w+l+1) \bigr\vert+l+1\leq2l+1. \end{eqnarray*}
We now show
\begin{equation} d_{\mathrm{TV}}\bigl(\mathscr{L}\bigl(N_n^{[l+1]}(1,w) \bigr),\mathscr{L}\bigl( N_n^{(l+1)}(1,w) \bigr) \bigr)\leq C n^{-l/(l+1)}, \label{25} \end{equation}
where $d_{\mathrm{TV}}$ is the total variation distance, which for integer-valued variables $X$ and $Y$ can be defined in two ways:
\[ d_{\mathrm{TV}}\bigl(\mathscr{L}(X),\mathscr{L}(Y)\bigr)=\frac{1}2\sum _{z\in\mathbb{Z}} \bigl\vert\mathbb{P}[X=z]-\mathbb{P}[Y=z]\bigr\vert=\inf _{(X,Y)}\mathbb{P}[X\neq Y]; \]
here, the infimum is taken over all possible couplings of $X$ and $Y$. Due to the latter definition,~\eqref{25} will imply the lemma since
\begin{eqnarray*} &&\mathbb{P}\bigl[\bigl\vertN_{n-l}(1,w+l+1)-N_n^{(l+1)}(1,w) \bigr\vert>2l+1 \bigr] \\ &&\qquad=\mathbb{P}\bigl[\bigl\vertN_{n-l}(1,w+l+1)-N _n^{(l+1)}(1,w)\bigr\vert \\ &&\qquad >2l+1, N_n^{[l+1]}(1,w) \neqN_n^{(l+1)}(1,w) \bigr] \\ &&\qquad\leq\mathbb{P}\bigl[N_n^{[l+1]}(1,w)\neqN _n^{(l+1)}(1,w) \bigr]. \end{eqnarray*}
Let $\nu_m=\IEN_n^m(1,w)$ and note that we can write $\prod _{i=0}^{l}(x+i)=\sum_{i=0}^{l+1} a_i x^i$ for nonnegative coefficients $a_i$ with $a_{l+1}=1$ (these coefficients are unsigned Stirling numbers of the first kind). Also note that for positive integers $k$ and $0\leq i\leq l+1$, we have $k^i\leq k^{l+1}$, and hence $\nu_i\leq\nu_{l+1}$. Thus,
\begin{eqnarray*} &&2d_{\mathrm{TV}}\bigl(\mathscr{L}\bigl(N_n^{[l+1]}(1,w) \bigr),\mathscr{L} \bigl(N_n^{(l+1)}(1,w) \bigr) \bigr) \\ &&\qquad= \sum_{k\geq0} \bigl\vert\mathbb{P}\bigl[ N_n^{[l+1]}(1,w)=k\bigr]-\mathbb{P}[N_n^{(l+1)}(1,w)=k] \bigr\vert \\ &&\qquad=\sum_{k} \biggl\vert\frac{\prod_{i=0}^{l}(k+i)}{\mathbb{E} D_{n,l+1}(1,w)} - \frac{k^{l+1}}{\nu_{l+1}}\biggr\vert\mathbb{P}\bigl[N_n(1,w)=k\bigr] \\ &&\qquad=\sum_{k} \Biggl\vert \Biggl(k^{l+1}+\sum_{i=0}^{l} a_i k^i\Biggr)\nu_{l+1}-k^{l+1} \Biggl(\nu_{l+1}+\sum_{i=0}^{l} a_i \nu_i\Biggr)\Biggr\vert\frac{\mathbb{P} [N_n(1,w)=k]}{\nu_{l+1}\mathbb{E} D_{n,l+1}(1,w)} \\ && \qquad\leq C\nu_{l}/\mathbb{E} D_{n,l+1}(1,w) \leq Cn^{-l/(1+l)}, \end{eqnarray*}
where the last line follows from~\eqref{21} of Lemma~\ref{lem3}. This proves the lemma. \end{pf}
Below let ${\mathcal{P}}_n(b,w)$ be the distribution of the number of white balls in the classical P\'olya urn started with $b$ black balls and $w$ white balls after $n$ draws. Recall that in the classical P\'olya urn balls are drawn and returned to the urn along with an additional ball of the same color [the notation is to suggest ${\mathcal{P}}^\infty_n(b,w)={\mathcal {P}}_n(b,w)$].
\begin{lemma}\label{lem6} There is a coupling $(Q_{w}(n), n V_w)_{n\geq1}$ with $Q_{w}(n) \sim
{\mathcal{P}}_n(1,w)$ and $V_w\sim\mathrm{B}(w,1)$ such that $|Q_{w}(n)-n V_w|\leq w+1$ for all $n$ almost surely. \end{lemma}
\begin{pf} Using \citet{Feller1968}, equation~(2.4), page~121, for $w\leq t \leq w+n$ we obtain
\begin{equation} \label{26} \mathbb{P}\bigl[Q_{w}(n)\leq t\bigr]=\prod _{i=0}^{w-1} \frac{t-i}{n+w-i}. \end{equation}
For $U_0, U_1, \ldots, U_{w-1}$ i.i.d. uniform $(0,1)$ variables, we may set
\[ Q_w(n)=\max_{i=0,1,\ldots, w-1} \bigl(i+\bigl \lceil(n+w-i)U_i\bigr\rceil\bigr), \]
since it is not difficult to verify that this gives the same cumulative distribution function as in~\eqref{26}. By a well-known representation of the beta distribution, we can take $V_w=\max (U_{0},\ldots, U_{w-1})$, and with this coupling the claim follows. \end{pf}
\begin{lemma}\label{lem7} If\/ $N_{n}(0,w+1) \sim{\mathcal{P}}^l_{n}(0,w+1)$ then \[ {\mathcal{P}}_n^l(1,w) = {\mathcal{P}} _{N_{n}(0,w+1)-w-1}(1,w). \] \end{lemma}
\begin{pf} Consider an urn with $1$ black ball and $w$ white balls. Balls are drawn from the urn and replaced as follows. After the $m$th ball is drawn, it is replaced in the urn along with another ball of the same color plus, if $m$ is divisible by $l$, an additional green ball. If $H$ is the number of times a nongreen ball is drawn in $n$ draws, the number of white balls in the urn after $n$ draws is distributed as ${\mathcal{P}}_H(1,w)$. The lemma follows after noting $H+w+1$ is distributed as ${\mathcal{P}} ^l_{n}(0,w+1)$ [which by definition is the distribution of $N _{n}(0,w+1)$] and the number of white balls in the urn after $n$ draws has distribution ${\mathcal{P}}^l_{n}(1,w)$. \end{pf}
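The auxiliary urn with green balls used in this proof is straightforward to simulate. The following Python sketch (illustrative only; names are ours) runs the urn exactly as described, returning the white-ball count, whose law is ${\mathcal{P}}^l_{n}(1,w)$, together with the number $H$ of draws in which a nongreen ball was drawn.

\begin{verbatim}
import random

def green_urn(n, w, l, rng):
    # Urn from the proof above: start with 1 black and w white balls.
    # Each drawn ball is returned with one more ball of its colour; after
    # every l-th draw an extra green ball is added.  Returns (whites, H),
    # where H counts the draws in which a nongreen ball was drawn.
    white, black, green, H = w, 1, 0, 0
    for m in range(1, n + 1):
        u = rng.random() * (white + black + green)
        if u < white:
            white += 1
            H += 1
        elif u < white + black:
            black += 1
            H += 1
        else:
            green += 1
        if m % l == 0:
            green += 1
    return white, H

if __name__ == "__main__":
    rng = random.Random(2)
    draws = [green_urn(300, 2, 3, rng) for _ in range(2000)]
    print(sum(wh for wh, _ in draws) / len(draws))  # Monte Carlo E[N_n]
\end{verbatim}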
\begin{pf*}{Proof of Theorem~\ref{thm1}} The asymptotic $\IEN_n^k \asymp n^{kl/(l+1)}$ is~\eqref{21} of Lemma~\ref{lem3}. We now show that
\[ \lim_{n\to\infty}\frac{\IEN_n^{l+1}}{n^l} = w \biggl(\frac {l+1}{l} \biggr)^{l}. \]
The asymptotic $\IEN_n^k \asymp n^{kl/(l+1)}$ implies that
\[ \frac{\IEN_n^{l+1}}{n^l}=\frac{\mathbb{E}\prod_{i=0}^{l} (i+N _n)}{n^{l}} + \mathrm{o}(1). \]
The numerator in the fraction on the right-hand side of the equality can be written using~\eqref{20} from Lemma~\ref{lem3} with $b=1, w=w$ and $m=l+1$ as
\begin{eqnarray*} \mathbb{E}\prod_{i=0}^{l} (i+ N_n) &=& \frac{\Gamma(w+l+1)}{\Gamma(w)}\prod_{i=0}^{n-1+\lfloor{ \vfrac {n-1}{l}}\rfloor} \frac{ w+1+i+l+1}{w+1+i} \\ &&{}\times \prod_{k=1}^{\lfloor { \vfrac{n-1}{l}}\rfloor} \frac{w+1+k l + k-1}{w+1+k l + k+l}, \end{eqnarray*}
and simplifying, especially noting the telescoping product in the final part of the term (which critically depends on having taken $m=l+1$), we have
\begin{eqnarray*} \mathbb{E}\prod_{i=0}^{l} (i+ N_n) &=& \frac{\Gamma(w+l+1)}{\Gamma(w)} \frac{\Gamma(w+2+l+n+\lfloor { \vfrac{n-1}{l}}\rfloor)\Gamma(w+1)}{\Gamma(w+l+2)\Gamma (w+1+n+\lfloor{ \vfrac{n-1}{l}}\rfloor)} \\ &&{}\times \frac{w+1+l}{w+l+1+\lfloor{ \vfrac{n-1}{l}}\rfloor(l+1)} \\ &=& w \frac{\Gamma(w+1+l+n+\lfloor{ \vfrac{n-1}{l}}\rfloor)}{\Gamma (w+1+n+\lfloor{ \vfrac{n-1}{l}}\rfloor)} \\ &&{}\times \frac{w+1+l+n+\lfloor{ \vfrac{n-1}{l}}\rfloor}{w+l+1+\lfloor{ \vfrac{n-1}{l}}\rfloor(l+1)}. \end{eqnarray*}
The asymptotic for $\IEN_n^{l+1}$ now follows by taking the limit as $n\to\infty$, using the well-known fact that, for $a>0$, $\lim_{x\to \infty} \frac{\Gamma(x+a)}{\Gamma(x) x^a}=1$ with $x=w+1+n+\lfloor{ \frac{n-1}{l}}\rfloor$.
The claimed asymptotic for $\mu_n$ follows directly from that of $\mathbb{E} N_n^{l+1}$, and with the order of the scaling $\mu_n$ in hand, the lower bound of Theorem~\ref{thm1} follows from \citet{Pekoz2013}, Lemma~4.1, which says that for a sequence of scaled integer valued random variables $(a_n N_n)$, if $a_n\to0$ and $\nu$ is a distribution with density bounded away from zero on some interval, then there is a positive constant $c$ such that $d_{\mathrm{K}}(\mathscr{L}(a_n N_n), \nu )\geq c a_n$.
To prove the upper bound we will invoke Theorem~\ref{thm4} and so we want to closely couple variables having marginal distributions equal to those of $N_n/\mu_n$ and $N^*=V_w N_n^{(l+1)}/\mu_n$. Lemma~\ref{lem6} implies there is a coupling of variables $(Q_w(n))_{n\geq1}$ with corresponding marginal distributions $({\mathcal {P}}_n(1,w))_{n\geq1}$ satisfying
\[ \bigl\vert V_w N_n^{(l+1)}- Q_w\bigl(N_n^{(l+1)}\bigr)\bigr\vert\leq w+1\qquad\mbox{almost surely.} \]
Further, by Lemma~\ref{lem5} we can construct a variable $N _{n-l}(1,w+l+1) \sim{\mathcal{P}}^l_{n-l}(1,w+l+1)$ such that
\[ \mathbb{P}\bigl[\bigl\vert Q_w\bigl(N_{n-l}(1,w+l+1) \bigr)- Q_w\bigl(N_n^{(l+1)}\bigr)\bigr \vert>2l+1 \bigr] \leq Cn^{-l/(l+1)}; \]
here we used that $\vert Q_w(s)-Q_w(t)\vert \leq\vert s-t\vert $. Recalling that ${\mathcal{P}}^l_{n-l}(1,w+l+1) = {\mathcal{P}}^l_{n}(0,w+1)$, Lemma~\ref{lem7} says that we can set $N_n=Q_w(N_{n-l}(1,w+l+1)-w-1)$ and it is immediate that
\begin{eqnarray} &&\bigl\vert Q_w\bigl(N_{n-l}(1,w+l+1)-w-1 \bigr)-Q_w\bigl(N_{n-l}(1,w+l+1)\bigr)\bigr\vert \leq w+1\nonumber \\ \eqntext{\mbox{almost surely.}} \end{eqnarray}
Thus, if we set $b=(2w+2l+3)/\mu_n$ then using the couplings above we find
\[ \mathbb{P}\bigl[ \bigl\vert{N_n}/{\mu_n}-{V_w N_n^{(l+1)}}/{\mu_n}\bigr\vert> b \bigr] \leq C \mu_n^{-1} \leq Cn^{-l/(l+1)}, \]
where the last inequality follows from~\eqref{21} of Lemma~\ref{lem3} which also implies $b \leq Cn^{-l/(l+1)}$. Using these couplings and the value of $b$ in Theorem~\ref{thm4} completes the proof. \end{pf*}
\section{Stein's method and proof of Theorem~\texorpdfstring{\protect\ref{thm4}}{1.16}}\label{sec3}
We first provide a general framework to develop Stein's method for $\log$-concave densities. The generalized gamma is a special case of this class. We use the density approach which is due to Charles Stein [see \citet{Reinert2005}]. This approach has already been discussed in other places in greater generality; see, for example, \citet{Chatterjee2011a}, \citet{Chen2011} and \citet {Dobler2012}. However, it seems to have gone unnoticed, at least explicitly, that the approach can be developed much more directly for $\log$-concave densities.
\subsection{Density approach for $\log$-concave distributions}\label{sec4}
Let $B$ be a function on the interval $(a,b)$ where $-\infty\leq a < b \leq \infty$. Assume also $B$ is absolutely continuous on $(a,b)$, $\int_a^b e^{-B(z)}\,dz <\infty$ and $B(a):=\lim_{x\to a^+} B(x)$ and $B(b):=\lim_{x\to b^-} B(x)$ exist as values in $\mathbb{R}\cup\{\infty\}$ and we use these to extend the domain of $B$ to $[a,b]$. Assume that $B$ has a left-continuous derivative on $(a,b)$, denoted by $B'$. From $B$, we can construct a distribution $P_B$ with probability density function
\[ \varphi_B(x) = C_B e^{-B(x)}, \qquad a<x<b \qquad\mbox{where }C_B^{-1} = \int_{a}^{b} e^{-B(z)}\,dz. \]
Let $L^1(P_B)$ be the set of measurable functions $h$ on $(a,b)$ such that \[ \int_a^b \bigl\vert h(x)\bigr\vert e^{-B(x)}\,dx < \infty.\] The distribution $P_B$ is $\log $-concave if and only if $B$ is convex. However, before dealing with this special case, we state a few more general results.
\begin{proposition}\label{prop8} If $Z\sim P_B$, we have
\[ \mathbb{E}\bigl\{f'(Z) - B'(Z)f(Z) \bigr\} = 0 \]
for all functions $f$ for which the expectation exists and for which
\[ \lim_{x\to a^+}f(x)e^{-B(x)}=\lim_{x\to b^-}f(x)e^{-B(x)}=0. \]
\end{proposition}
\begin{pf}Integration by parts. We omit the straightforward details. \end{pf}
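As a simple illustration of Proposition~\ref{prop8}, take $B(x)=x$ on $(a,b)=(0,\infty)$: then $C_B=1$, $P_B$ is the standard exponential distribution, and the identity reduces to the familiar characterization $\mathbb{E}\{f'(Z)-f(Z)\}=0$.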
Now, for $h\in L^1(P_B)$ and $Z\sim P_B$, let
\[ \tilde h(x) = h(x)-\mathbb{E} h(Z) \]
and, for $x\in(a,b)$,
\begin{equation} \label{27} f_h(x) = e^{B(x)} \int_a^x \tilde h(z)e^{-B(z)}\,dz = - e^{B(x)} \int_x^b \tilde h(z)e^{-B(z)}\,dz. \end{equation}
The key fact is that $f_h$ satisfies the differential (Stein) equation
\begin{equation} \label{28} f_h'(x) - B'(x)f_h(x) = \tilde h(x),\qquad x\in(a,b). \end{equation}
Define the Mills-type ratios
\begin{equation} \label{29} \kappa_a(x) = e^{B(x)}\int _a^x e^{-B(z)}\,dz, \qquad \kappa_b(x) = e^{B(x)}\int_x^b e^{-B(z)}\,dz. \end{equation}
From~\eqref{27} and~\eqref{28}, we can easily deduce the following nonuniform bounds.
\begin{lemma}\label{lem8} If $h\in L^1(P_B)$ is bounded, then for all $x\in(a,b)$,
\begin{eqnarray} \bigl\vert f_h(x)\bigr\vert&\leq&\Vert\tilde h\Vert \bigl(\kappa_a(x)\wedge\kappa_b(x) \bigr), \label{30} \\ \bigl\vert f'_h(x)\bigr\vert&\leq&\Vert\tilde h\Vert\bigl\{1+ \bigl\vert B'(x)\bigr\vert\bigl( \kappa_a(x)\wedge\kappa_b(x) \bigr) \bigr\}. \label{31} \end{eqnarray}
\end{lemma}
In the case of convex functions, we can easily adapt the proof of \citet{Stein1986} to obtain the following uniform bounds.
\begin{lemma}\label{lem9} If $B$ is convex on $(a,b)$ with unique minimum $x_0\in[a,b]$, then for any $h\in L^1(P_B)$,
\begin{equation} \Vert f_h\Vert\leq\Vert\tilde h\Vert\frac{e^{B(x_0)}}{C_B}, \qquad\bigl\Vert B'f_h\bigr\Vert\leq\Vert \tilde h\Vert, \qquad\bigl\Vert f'_h\bigr\Vert \leq2\Vert\tilde h\Vert. \label{32} \end{equation}
\end{lemma}
\begin{pf} By convexity, we clearly have
\begin{equation} \label{33} x_0\leq x \leq z \leq b\quad\Longrightarrow\quad B(x) \leq B(z)\quad\mbox{and}\quad B'(x)\leq B'(z). \end{equation}
This implies that for $x>x_0$
\[ \int_x^b e^{-B(z)} \,dz \leq\int _x^b\frac{B'(z)}{B'(x)}e^{-B(z)}\,dz = \frac{e^{-B(x)}-e^{-B(b)}}{B'(x)} \leq\frac{e^{-B(x)}}{B'(x)}, \]
where in the last bound we use~\eqref{33} which implies $B'(x)>0$. So
\begin{equation} \label{34} B'(x)\kappa_b(x) \leq1. \end{equation}
Now, from this we have for $x>x_0$
\[ \kappa_b'(x) = -1+B'(x) \kappa_b(x) \leq0. \]
Similarly, we have
\begin{equation} \label{35} a\leq z \leq x \leq x_0\quad\Longrightarrow\quad B(z) \geq B(x)\quad\mbox{and}\quad\bigl\vert B'(z)\bigr\vert\geq\bigl\vert B'(x)\bigr\vert. \end{equation}
So, using~\eqref{35}, for $x<x_0$,
\[ \int_a^x e^{-B(z)} \,dz \leq\int _a^x\frac{\vert B'(z)\vert }{\vert B'(x)\vert }e^{-B(z)}\,dz = \frac{e^{-B(x)}-e^{-B(a)}}{\vert B'(x)\vert } \leq\frac {e^{-B(x)}}{\vert B'(x)\vert }, \]
thus
\begin{equation} \label{36} \bigl\vert B'(x)\bigr\vert \kappa_a(x) \leq1, \end{equation}
and so for $x<x_0$
\[ \kappa_a'(x) = 1+B'(x) \kappa_a(x) \geq0. \]
From~\eqref{30}, we obtain
\[ \Vert f\Vert\leq\Vert\tilde h\Vert\sup_x\cases{ \kappa_a(x), &\quad if $x< x_0$, \cr \kappa_b(x), &\quad if $x\geq x_0$.} \]
Hence, since $\kappa_a$ is increasing for $x<x_0$ and $\kappa_b$ is decreasing for $x>x_0$, the supremum is controlled by the values at $x_0$ and
\[ \Vert f\Vert\leq\Vert\tilde h\Vert\bigl(\kappa_a(x_0) \vee\kappa_b(x_0) \bigr). \]
The first bound of~\eqref{32} now follows from the fact that $\kappa _a(x_0)\vee \kappa_b(x_0)\leq\kappa_a(x_0) + \kappa_b(x_0)$. The second bound of~\eqref{32} follows from \eqref{30} in combination with~\eqref{34} and~\eqref{36}. Using~\eqref{31}, the third bound of~\eqref{32} follows in the same way. \end{pf}
\begin{remark} Lemma~\ref{lem9} applies to the standard normal distribution in which case $B(x)=x^2/2$, $x_0=0$, and $C_B=(2\pi)^{-1/2}$ and~\eqref{32} implies
\[ \Vert f_h\Vert\leq\Vert\tilde h\Vert\sqrt{2\pi}, \qquad \bigl\Vert f'_h\bigr\Vert\leq2\Vert\tilde h \Vert. \]
The best-known bounds are given in \citet{Chen2011}, Lemma~2.4, which improve the first bound by a factor of $2$ and match the second. In the special case of the form $h(\cdot)={\mathrm{I}}[\cdot\leq t]$, \citet{Chen2011}, Lemma~2.3, matches the bound of Lemma~\ref {lem9} of $\vert xf_h(x)\vert \leq\Vert\tilde h\Vert $. \end{remark}
Though not used below explicitly, we record the following theorem summarizing the utility of the lemmas above.
\begin{theorem} Let $B$ be convex on $(a,b)$ with unique minimum $x_0$, $Z\sim P_B$, and $W$ be a random variable on $(a,b)$. If $\mathcal{F}$ is the set of functions on $(a,b)$ such that for $f\in\mathcal{F}$
\[ \Vert f\Vert\leq\frac{e^{B(x_0)}}{C_B}, \qquad\bigl\Vert B'f \bigr\Vert\leq1, \qquad\bigl\Vert f'\bigr\Vert\leq2, \]
then
\[ \sup_{t\in(a,b)} \bigl\vert\mathbb{P}[Z\leq t]-\mathbb{P}[W\leq t]\bigr\vert \leq\sup_{f\in \mathcal{F}} \bigl\vert\mathbb{E}\bigl\{f'(W)-B'(W)f(W) \bigr\} \bigr\vert. \]
\end{theorem}
\begin{pf} For $t\in(a,b)$, if $h_t(x)={\mathrm{I}}[x\leq t]$, then taking the expectation in~\eqref{28} implies that
\begin{equation} \mathbb{P}[W\leq t]-\mathbb{P}[Z\leq t]= \mathbb{E}\bigl\{f_t'(W)-B'(W)f_t(W) \bigr\}, \label{37} \end{equation}
where $f_t$ satisfies~\eqref{28} with $h=h_t$. Taking the absolute value and the supremum over $t\in(a,b)$ on both sides of~\eqref{37}, we find
\[ \sup_{t\in(a,b)} \bigl\vert\mathbb{P}[W\leq t]-\mathbb{P}[Z\leq t]\bigr\vert = \sup_{t\in (a,b)} \bigl\vert\mathbb{E}\bigl\{f'_t(W)-B'(W)f_t(W) \bigr\} \bigr\vert. \]
The result follows since $h_t(x)\in[0,1]$ implies $\Vert\tilde h\Vert \leq1$, and so by Lemma~\ref{lem9}, $f_t\in\mathcal{F}$ for all $t\in(a,b)$. \end{pf}
Finally, we will need the following two lemmas to develop Stein's method. The proofs are standard and can be easily adapted from the normal case; see, for example, \citet{Chen2005} and Rai{\v{c}} (\citeyear{Raic2003}).
\begin{lemma}[(Smoothing inequality)]\label{lem10} Let $B$ be convex on $(a,b)$ with unique minimum $x_0$ and let $Z\sim P_B$. Then, for any random variable $W$ taking values in $(a,b)$ and for any $\varepsilon>0$, we have
\[ d_{\mathrm{K}}\bigl(\mathscr{L}(W),\mathscr{L}(Z) \bigr) \leq\sup_{a<s<b}\bigl\vert \mathbb{E} h_{s,\varepsilon }(W)-\mathbb{E} h_{s,\varepsilon}(Z)\bigr\vert+ C_Be^{-B(x_0)}\varepsilon, \]
where
\begin{equation} \label{haha} h_{s,\varepsilon}(x) = \frac{1}{\varepsilon} \int _0^\varepsilon{\mathrm{I}}[ x\leq s+u]\,du. \end{equation}
\end{lemma}
\begin{lemma}[(Bootstrap concentration inequality)]\label{lem11} Let $B$ be convex on $(a,b)$ with unique minimum $x_0$ and let $Z\sim P_B$. Then, for any random variable $W$ taking values in $(a,b)$, for any $a<s<b$, and for any $\varepsilon>0$, we have
\[ \mathbb{P}[s\leq W\leq s+\varepsilon] \leq C_Be^{-B(x_0)}\varepsilon+ 2d_{\mathrm{K}}\bigl( \mathscr{L}(W),\mathscr{L}(Z) \bigr). \]
\end{lemma}
\subsection{Application to the generalized gamma distribution}\label{sec5}
We use the general results of Section~\ref{sec4} to prove the following more explicit statement of Theorem~\ref{thm4} for the generalized gamma distribution.
\begin{theorem}\label{thm5} Let\vspace*{1pt} $Z\sim\operatorname{GG}(\alpha,\beta)$ for some $\alpha\geq1, \beta\geq1$ and let $W$ be a nonnegative random variable with $\mathbb{E} W^\beta= \alpha/\beta$. Let $W^*$ have the $(\alpha,\beta)$-generalized equilibrium distribution of $W$ as in Definition~\ref{def1}. If $\beta=1$ or $\beta\geq2$, then for all $0<b\leq1$,
\begin{eqnarray*} && d_{\mathrm{K}}\bigl(\mathscr{L}(W), \mathscr{L}(Z) \bigr) \\ &&\qquad \leq b \bigl[10 M_{\alpha,\beta}+2 \beta( \beta-1) \bigl(1+2^{\beta-2} \bigl(\mathbb{E} W^{\beta-1}+b^{\beta-1} \bigr) \bigr)M_{\alpha,\beta}'+4 \beta\mathbb{E} W^{\beta-1} \bigr] \\ &&\quad\qquad{}+4 \bigl(2+(\beta+\alpha-1)M_{\alpha,\beta}' \bigr)\mathbb{P}\bigl[ \bigl\vert W-W^*\bigr\vert>b\bigr], \end{eqnarray*}
where here and below
\begin{eqnarray*} M_{\alpha,\beta}&:=&\alpha^{1-1/\beta} \beta^{1/\beta} e^{-4/9+{1}/(6\sklvfrac{\alpha-1}{\beta}+9/4)} \biggl(2{ \frac{\alpha-1}{\beta}}+1 \biggr)^{-1/2} \\ &\leq& {e^{1/e} \alpha ^{1-1/\beta}} { \biggl(2{ \frac{\alpha-1}{\beta}}+1 \biggr)^{-1/2}}, \\ M'_{\alpha,\beta}&:=& \sqrt{2\pi} e^{-{1}/(6\sklvfrac{\alpha -1}{\beta}+9/4)}{ \biggl({ \frac{\alpha-1}{\beta}}+1/2 \biggr)^{1/2} \biggl({ \frac{\alpha-1}{\beta}}+1 \biggr)^{1/\beta}} {\alpha^{-1} } \\ & \leq&\sqrt{2\pi} { \biggl({ \frac{\alpha-1}{\beta}}+1/2 \biggr)^{1/2} \biggl({ \frac{\alpha-1}{\beta}}+1 \biggr)^{1/\beta}} {\alpha^{-1}}. \end{eqnarray*}
If $1<\beta<2$, then for all $0<b\leq1$,
\begin{eqnarray*} && d_{\mathrm{K}}\bigl(\mathscr{L}(W), \mathscr{L}(Z) \bigr) \\ &&\qquad \leq b \bigl(10 M_{\alpha,\beta}+4 \beta\mathbb{E} W^{\beta-1} \bigr)+2 \beta b^{\beta-1}M_{\alpha,\beta}' \\ &&\quad\qquad{} +4 \bigl(2+(\beta+ \alpha-1)M_{\alpha,\beta}' \bigr)\mathbb{P}\bigl[\bigl\vert W-W^*\bigr \vert>b\bigr]. \end{eqnarray*}
\end{theorem}
\begin{remark} For a given $\alpha$ and $\beta$, the constants in the theorem may be sharpened. For example, the case $\alpha=\beta=1$ of the theorem is the exponential approximation result~(2.5) of Theorem~2.1 of \citet{Pekoz2011}, but here with larger constants. These larger constants come from three sources: first, below we bound some maximums of nonnegative numbers by sums for the sake of simple formulas (only if all but one of the terms in the maximum is positive is there any hope of optimality in the constants). Second, $M_{\alpha, \beta}$ and $M_{\alpha, \beta}'$ arise from bounds on the generalized gamma density, achieved by using both sides of the inequalities in Theorems~\ref{thm6} and~\ref{thm7} below. These inequalities are not optimal at the same value for each side, so some precision could be gained by using the appropriate exact bounds on the density which in principle are recoverable from the work below, but not particularly informative. Finally, in special cases more information about the Stein solution may be obtained. For example, in \citet{Pekoz2011} the term $\vert g(W)-g(W^*)\vert $ that appears in the proof of Theorem~\ref{thm5} is there bounded by~$1$, whereas following Lemma~\ref{lem16}, our general bound specializes to $2\Vert g\Vert \leq4.3$. \end{remark}
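To get a rough feel for the size of the constants appearing in Theorem~\ref{thm5}, the first (sharper) expressions for $M_{\alpha,\beta}$ and $M'_{\alpha,\beta}$ can be evaluated numerically; the following Python sketch (illustrative only; the function name is ours) does this for a few pairs $(\alpha,\beta)$. For instance, $(\alpha,\beta)=(1,1)$ gives $M_{1,1}=1$ and $M'_{1,1}=\sqrt{\pi}e^{-4/9}\approx1.14$.

\begin{verbatim}
import math

def theorem_constants(alpha, beta):
    # First expressions for M_{alpha,beta} and M'_{alpha,beta} in the
    # theorem above (before the simplified upper bounds).
    g = (alpha - 1) / beta
    M = (alpha ** (1 - 1 / beta) * beta ** (1 / beta)
         * math.exp(-4 / 9 + 1 / (6 * g + 9 / 4))
         / math.sqrt(2 * g + 1))
    Mp = (math.sqrt(2 * math.pi) * math.exp(-1 / (6 * g + 9 / 4))
          * math.sqrt(g + 0.5) * (g + 1) ** (1 / beta) / alpha)
    return M, Mp

if __name__ == "__main__":
    for a, b in [(1, 1), (2, 2), (3, 2)]:
        print((a, b), theorem_constants(a, b))
    # e.g. (1, 1) gives M = 1 and M' = sqrt(pi) * exp(-4/9) ~ 1.14
\end{verbatim}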
In the notation of Section~\ref{sec4}, for the generalized gamma distribution we have $\varphi_{\alpha,\beta}(x) = C e^{-B(x)}$, $x>0$ with $a=0$ and $b=\infty$, and
\[ B(x) = x^\beta-(\alpha-1)\log x,\qquad C=\frac{\beta}{\Gamma (\alpha/\beta)}. \]
If $\alpha\geq1$ and $\beta\geq1$, then $B$ has nonnegative second derivative and is thus convex. Since
\[ B'(x) = \beta x^{\beta-1} -\frac{(\alpha-1)}{x}, \]
$B$ has a unique minimum at $x_0= (\frac{\alpha-1}{\beta} )^{1/\beta}$. Hence,
\[ B(x_0) = \psi\biggl(\frac{\alpha-1}{\beta} \biggr)\qquad\mbox{with } \psi(x) = x - x\log(x), \psi(0)=0, \]
and
\begin{equation} C e^{-B(x_0)} = C e^{-\psi((\alpha-1)/\beta)}. \label{38} \end{equation}
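For example, for the $\operatorname{GG}(2,2)$ distribution (with density $2xe^{-x^2}$ for $x>0$) we have $x_0=2^{-1/2}$, $B(x_0)=\psi(1/2)=\tfrac{1}{2}(1+\log2)$ and $Ce^{-B(x_0)}=\sqrt{2}e^{-1/2}\approx0.86$, which is simply the maximum of the density.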
In order to apply Lemmas~\ref{lem10} and~\ref{lem11}, we need to bound~\eqref{38}, for which we use the following two results about the gamma function.
\begin{theorem}[{[\citet{Batir2008}, Corollary 1.2]}]\label{thm6} For all $x\geq0$,
\[ \sqrt{2} e^{4/9} \leq\frac{\Gamma(x+1)}{x^x e^{-x-{1}/{(6x+9/4)}}\sqrt {x+1/2}} \leq\sqrt{2\pi}. \]
\end{theorem}
\begin{theorem}[{[\citet{Wendel1948}, (7)]}]\label{thm7} If $x>0$ and $0\leq s \leq1$, then
\begin{eqnarray*} &&\biggl(\frac{x}{x+s} \biggr)^{1-s}\leq\frac{\Gamma(x+s)}{x^s\Gamma (x)}\leq1. \end{eqnarray*}
\end{theorem}
\begin{lemma}\label{lem12} If $C$, $B$, and $x_0$ are as above for the generalized gamma distribution and $\alpha\geq1, \beta\geq1$, then $C e^{-B(x_0)}\leq M_{\alpha,\beta}$. \end{lemma}
\begin{pf} Using Theorem~\ref{thm6} with $x=(\alpha-1)/\beta$ in the inequality below implies
\begin{eqnarray}\label{39} \qquad e^{-B(x_0)} &=& \biggl({ \frac{\alpha-1}{\beta}} \biggr)^{(\alpha -1)/\beta }e^{-\vfrac {\alpha-1}{\beta}} \nonumber\\[-8pt]\\[-8pt]\nonumber &\leq&\Gamma\biggl({ \frac{\alpha-1}{\beta}} +1 \biggr ){e^{-4/9+{1}/{(6\sklvfrac{\alpha-1}{\beta}+9/4)}}} \biggl(2{ \frac{\alpha-1}{\beta}}+1 \biggr)^{-1/2}. \end{eqnarray}
Since $C=\beta/\Gamma(\alpha/\beta)$, Theorem~\ref{thm7} with $x=\alpha/\beta$ and $s=1-1/\beta$ yields
\[ C \Gamma\biggl({ \frac{\alpha-1}{\beta}} +1 \biggr)\leq\alpha ^{1-1/\beta } \beta^{1/\beta}, \]
and combining this with~\eqref{39}, the lemma follows. \end{pf}
We can also now prove the following lemma which is used in applying Lemma~\ref{lem9}.
\begin{lemma}\label{lem13} If $B$ and $x_0$ are as above for the generalized gamma distribution and $\beta\geq1, \alpha\geq1$, then $e^{B(x_0)}{\Gamma(\alpha/\beta)}/{\beta}\leq M'_{\alpha,\beta}$. \end{lemma}
\begin{pf} Using Theorem~\ref{thm6} with $x=(\alpha-1)/\beta$ in the following inequality, we find
\begin{eqnarray}\label{40} \qquad e^{B(x_0)}&=& \biggl({ \frac{\alpha-1}{\beta}} \biggr)^{-(\alpha -1)/\beta }e^{\vfrac{\alpha-1}{\beta}} \nonumber\\[-8pt]\\[-8pt]\nonumber &\leq& \sqrt{2\pi} { e^{-{1}/{(6\sklvfrac{\alpha-1}{\beta }+9/4)}} \biggl({ \frac{\alpha-1}{\beta}}+1/2 \biggr)^{1/2} } \Gamma\biggl({ \frac{\alpha-1}{\beta}}+1 \biggr)^{-1}. \end{eqnarray}
Now, Theorem~\ref{thm7} with $x=\alpha/\beta$ and $s=1-1/\beta$ yields
\begin{eqnarray*} &&\frac{\Gamma(\alpha/\beta)}{\Gamma(\sklvfrac{\alpha-1}{\beta }+1 )}\leq\frac{\beta}{\alpha} \biggl(\frac{\alpha-1}{\beta }+1 \biggr)^{1/\beta}, \end{eqnarray*}
and combining this with~\eqref{40}, the lemma follows. \end{pf}
Before proving Theorem~\ref{thm5}, we collect properties of the Stein solution for the generalized gamma distribution, which, according to~\eqref{27} and~\eqref{28}, satisfies
\begin{eqnarray} \label{41} &\displaystyle f(x) := f_h(x) =x^{1-\alpha}e^{x^\beta} \int_0^x \tilde h(z)z^{\alpha-1} e^{-z^\beta}\,dz,& \nonumber\\[-8pt]\\[-8pt]\nonumber &\displaystyle f'(x)+ \biggl(\frac{\alpha-1}{x}-\beta x^{\beta-1} \biggr)f(x)=\tilde h(x).& \end{eqnarray}
First, we record a straightforward application of Lemmas~\ref{lem9} and~\ref{lem13}.
\begin{lemma}\label{lem14} If $f$ is given by~\eqref{41}, then
\[ \Vert f\Vert\leq\Vert\tilde h\Vert M'_{\alpha,\beta}, \qquad\bigl\Vert f'\bigr\Vert\leq2\Vert\tilde h\Vert. \]
\end{lemma}
\begin{lemma}\label{lem15} If $f$ is given by~\eqref{41}, $x>0$, $\vert t\vert \leq b\leq1$, and $x+t>0$, then for $\beta=1$ and $\beta\geq2$,
\begin{eqnarray*} && \bigl\vert(x+t)^{\beta-1}f(x+t) - x^{\beta-1}f(x)\bigr\vert \\ &&\qquad\leq\Vert\tilde{h}\Vert b \bigl[ (\beta-1) \bigl(1+2^{\beta -2} \bigl(x^{\beta-1}+ b^{\beta-1}\bigr)\bigr) M'_{\alpha,\beta} + 2 x^{\beta-1} \bigr]=:\Vert\tilde{h}\Vert C_{b, \alpha, \beta}(x). \end{eqnarray*}
For $1<\beta<2$, we have
\begin{eqnarray*} && \bigl\vert(x+t)^{\beta-1}f(x+t) - x^{\beta-1}f(x)\bigr\vert \\ &&\qquad \leq \Vert\tilde{h}\Vert\bigl( b^{\beta-1} M'_{\alpha,\beta} + 2 b x^{\beta-1} \bigr)=:\Vert\tilde{h}\Vert C_{b, \alpha, \beta}(x). \end{eqnarray*}
\end{lemma}
\begin{pf} Observe that for all $\beta\geq1$,
\begin{eqnarray}\label{42} && \bigl\vert(x+t)^{\beta-1}f(x+t) - x^{\beta-1}f(x)\bigr\vert \nonumber \\ && \qquad\leq\bigl\vert(x+t)^{\beta-1} - x^{\beta-1}\bigr\vert \bigl\vert f(x+t)\bigr\vert+ x^{\beta-1}\bigl\vert f(x+t) - f(x)\bigr \vert \\ && \qquad\leq\bigl\vert(x+t)^{\beta-1} - x^{\beta-1}\bigr\vert \Vert f\Vert+ b x^{\beta-1} \bigl\Vert f'\bigr\Vert.\nonumber \end{eqnarray}
In all cases, we use Lemma~\ref{lem14} to bound the norms appearing in~\eqref{42}. For the remaining term, if $\beta=1$, then $\vert(x+t)^{\beta-1} - x^{\beta-1}\vert =0$ and the result follows.
If $\beta\geq2$, then the mean value theorem implies
\begin{eqnarray*} &&\bigl\vert(x+t)^{\beta-1} - x^{\beta-1}\bigr\vert\leq\vert t \vert(\beta-1) \bigl(x+\vert t\vert\bigr)^{\beta -2} \leq b (\beta-1) (x+b)^{\beta-2}. \end{eqnarray*}
Since $\beta\geq2$,
\begin{eqnarray*} &&(x+b)^{\beta-2}\leq\max\bigl\{1,(x+b)^{\beta-1} \bigr\} \leq\max \bigl\{1, 2^{\beta-2}\bigl(x^{\beta-1}+b^{\beta-1}\bigr) \bigr\}, \end{eqnarray*}
where the last inequality is H\"older's, and the result in this case follows by bounding the maximum by the sum.
For $1< \beta< 2$, since $x^{\beta-1}$ is concave and increasing, $\vert(x+t)^{\beta-1} - x^{\beta-1}\vert $ is maximized when $x=0$ and $t=b$ in which case it equals $b^{\beta-1}$. \end{pf}
\begin{lemma}\label{lem16} If $f$ is given by~\eqref{41}, and we define
\begin{eqnarray} g(x) &=& f'(x) + \frac{\alpha-1}{x} f(x),\qquad x>0, \label{43} \end{eqnarray}
then
\begin{eqnarray} g(x)&=& \tilde h(x) + \beta x^{\beta-1}f(x), \label{44} \end{eqnarray}
and for $\beta\geq1$,
\[ \Vert g\Vert\leq\Vert\tilde{h}\Vert\max\bigl\{2+(\alpha -1)M'_{\alpha,\beta}, 1+\beta M'_{\alpha,\beta} \bigr\} \leq\Vert\tilde h\Vert\bigl(2+(\beta+\alpha-1)M'_{\alpha ,\beta} \bigr). \]
\end{lemma}
\begin{pf} The fact that~\eqref{43} equals~\eqref{44} is a simple rearrangement of the second equality of~\eqref{41}.
For the bounds, if $x\geq1$, then~\eqref{43} implies
\begin{eqnarray*} &&\bigl\vert g(x)\bigr\vert\leq\bigl\Vert f'\bigr\Vert+( \alpha-1)\Vert f\Vert, \end{eqnarray*}
and if $x\leq1$, then~\eqref{44} implies
\begin{eqnarray*} &&\bigl\vert g(x)\bigr\vert\leq\Vert\tilde h\Vert+\beta\Vert f\Vert , \end{eqnarray*}
so that
\begin{eqnarray*} \Vert g\Vert&\leq&\max\bigl\{ \bigl\Vert f'\bigr\Vert+( \alpha-1)\Vert f\Vert,\Vert\tilde{h}\Vert+\beta\Vert f\Vert\bigr \}, \end{eqnarray*}
and the result follows from Lemma~\ref{lem14}. \end{pf}
The purpose of introducing $g$ in Lemma~\ref{lem16} is illustrated in the following lemma.
\begin{lemma}\label{lem17} If $f$ is a bounded function on $[0,\infty)$ with bounded derivative such that $f(0)=0$, $W\geq0$ is a random variable with $\mathbb{E} W^\beta= \alpha/\beta$, and $W^*$ has the $(\alpha ,\beta)$-generalized equilibrium distribution of $W$ as in Definition~\ref{def1}, then for $g(x)=f'(x)+(\alpha-1) x^{-1} f(x)$,
\[ \mathbb{E} g\bigl(W^*\bigr)=\beta\mathbb{E} W^{\beta-1} f(W). \]
\end{lemma}
\begin{pf} If $V_\alpha\sim\mathrm{B}(\alpha,1)$ is independent of $W^{(\beta)} $ having the $\beta$-power bias distribution of $W$, then we can set $W^*=V_\alpha W^{(\beta)} $ and
\begin{eqnarray}\label{45} \mathbb{E} f'\bigl(W^*\bigr) &=&\mathbb{E} f'\bigl(V_\alpha W^{(\beta)}\bigr) =\frac{\beta}{\alpha}\mathbb{E} W^\beta f'(V_\alpha W) \nonumber\\[-8pt]\\[-8pt]\nonumber &=& \beta\mathbb{E} W^{\beta } \int_0^1 u^{\alpha-1}f'(uW) \,du. \end{eqnarray}
The case $\alpha=1$ easily follows from performing the integration in~\eqref{45}, keeping in mind that $f(0)=0$. If $\alpha>1$, similar to the computation of~\eqref{45},
\begin{equation} (\alpha-1) \mathbb{E}\frac{f(W^*)}{W^*}=\beta(\alpha-1) \mathbb{E} W^{\beta-1} \int _0^1 u^{\alpha-2}f(uW) \,du. \label{46} \end{equation}
Applying integration by parts in~\eqref{46} and noting $f(0)=0$ yields
\begin{equation} \qquad (\alpha-1) \mathbb{E}\frac{f(W^*)}{W^*}=\beta\mathbb{E}\biggl\{W^{\beta -1} \biggl( f(W) - W\int_0^1 u^{\alpha-1} f'(uW) \,du \biggr) \biggr\},\label{47} \end{equation}
and adding the right-hand sides of~\eqref{45} and~\eqref{47} yields the lemma. \end{pf}
We are now in a position to prove our generalized gamma approximation result.
\begin{pf*}{Proof of Theorem~\ref{thm5}} Let $\delta= d_{\mathrm{K}}(\mathscr{L}(W),\mathscr{L}(Z) )$ and let $h_{s,\varepsilon}$ be the smoothed indicators defined at~\eqref{haha} in Lemma~\ref{lem10}. From Lemmas~\ref{lem10} and~\ref{lem12}, we have for every $\varepsilon>0$,
\begin{equation} \delta\leq\sup_{s>0}\bigl\vert\mathbb{E} h_{s,\varepsilon}(W)-\mathbb{E} h_{s,\varepsilon}(Z)\bigr\vert+ M_{\alpha,\beta}\varepsilon. \label{48} \end{equation}
Fix $\varepsilon$ and $s$, let $f$ solve the Stein equation given explicitly by~\eqref{41} with $h:=h_{s,\varepsilon}$ and let $g$ be as in Lemma~\ref{lem16}. By Lemma~\ref{lem17},
\begin{eqnarray*} \mathbb{E} h(W) - \mathbb{E} h(Z) &=&\mathbb{E}\bigl\{f'(W)-B'(W)f(W) \bigr\} \\ &=&\mathbb{E}\biggl\{f'(W)- \biggl(\beta W^{\beta-1}- \frac{\alpha -1}{W} \biggr)f(W) \biggr\} \\ &=& \mathbb{E}\bigl\{g(W) - \beta W^{\beta-1} f(W) \bigr\} = \mathbb{E}\bigl\{g(W) -g \bigl(W^*\bigr) \bigr\}. \end{eqnarray*}
We want to bound this last term, since in absolute value it is equal to the first part of the bound in~\eqref{48}. With $I_1 = {\mathrm{I}}[\vert W-W^*\vert \leq b]$,
\begin{eqnarray}\label{49} && \bigl\vert\mathbb{E}\bigl\{g(W) -g\bigl(W^*\bigr) \bigr\} \bigr\vert \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad \leq 2\Vert g\Vert\mathbb{P}\bigl[\bigl\vert W-W^*\bigr\vert>b\bigr] + \bigl \vert\mathbb{E} \bigl\{I_1 \bigl(g(W) -g\bigl(W^*\bigr) \bigr) \bigr\} \bigr\vert. \end{eqnarray}
Note from the representation~\eqref{44} of $g$, if $x>0$, $\vert t\vert \leq b\leq1$, and $x+t>0$,
\begin{eqnarray*} &&g(x+t) - g(x) = h(x+t)-h(x) + \beta\bigl((x+t)^{\beta-1}f(x+t) - x^{\beta-1}f(x) \bigr) \end{eqnarray*}
and since $\vert h(x+t)-h(x)\vert \leq\varepsilon^{-1} \int_{t \wedge 0}^{t\vee0} {\mathrm{I}}[s<x+u\leq s+\varepsilon] \,du$, we apply Lemma~\ref{lem15} to find
\begin{eqnarray}\label{50} && \bigl\vert\mathbb{E}\bigl\{I_1 \bigl(g(W) -g\bigl(W^*\bigr) \bigr) \bigr\} \bigr\vert \nonumber\\[-8pt]\\[-8pt]\nonumber &&\qquad \leq\frac{1}{\varepsilon}\sup_{s\geq0}\int _{0}^b\mathbb{P}[s < W + u \leq s + \varepsilon] \,du + C_{b,\alpha, \beta}, \end{eqnarray}
where $C_{b, \alpha, \beta}:=\mathbb{E} C_{b, \alpha, \beta}(W)$ and $C_{b, \alpha, \beta}(x)$ is defined in Lemma~\ref{lem15}; and observe that for $1<\beta<2$, $ C_{b,\alpha, \beta}$ is bounded since $\mathbb{E} W^{\beta-1} \leq(\mathbb{E} {W^\beta})^{(\beta-1)/\beta}$.
Now using Lemmas~\ref{lem11} and~\ref{lem12} to find
\[ \mathbb{P}[s < W + u \leq s + \varepsilon]\leq M_{\alpha,\beta}\varepsilon+2\delta \]
and combining~\eqref{48},~\eqref{49} and~\eqref{50}, we have
\[ \delta\leq M_{\alpha,\beta}\varepsilon+ 2\Vert g\Vert\mathbb{P}\bigl[\bigl \vert W-W^* \bigr\vert> b\bigr] + C_{b,\alpha, \beta}+b M_{\alpha,\beta}+2 \varepsilon^{-1}b \delta. \]
Applying Lemma~\ref{lem16} to bound $\Vert g\Vert $, setting $\varepsilon= 4b$, and solving for $\delta$ now yields the bounds of the theorem. \end{pf*}
\section*{Acknowledgments} A portion of the work for this project was completed when Nathan Ross was at University of California, Berkeley. Erol A. Pek\"oz would like to thank the Department of Statistics and Applied Probability, National University of Singapore, for its hospitality. We also thank Shaun McKinlay for pointing us to the reference~\citet {Pakes1992}, Larry Goldstein for the suggestion to study Rayleigh limits in bridge random walks, Louigi Addario-Berry for helpful discussions, Jim Pitman for detailed pointers to the literature connecting trees, walks and urns, and the Associate Editor and referee for their many detailed and useful comments which have greatly improved this work.
\printaddresses
\end{document}
\begin{document}
\title{Global Nonlinear Stability of Geodesic Solutions of Evolutionary Faddeev Model} \begin{abstract} In this paper, for the evolutionary Faddeev model corresponding to maps from the Minkowski space $\mathbb{R}^{1+n}$ to the unit sphere $\mathbb{S}^2$, we show the global nonlinear stability of geodesic solutions, which are a class of nontrivial, large solutions.
\\ \emph{keywords}: Faddeev model; Quasilinear wave equations; Global nonlinear stability.\\ \emph{2010 MSC}: 35L05; 35L72. \end{abstract}
\pagestyle{plain} \pagenumbering{arabic}
\section{Introduction and main result} In quantum field theory, the Faddeev model is an important model that describes heavy elementary particles by knotted topological solitons. It was introduced by Faddeev in \cite{MR1553682,Fadder} and is a generalization of the well-known classical nonlinear $\sigma$ model of Gell-Mann and L\'{e}vy \cite{MR0140316}, and it is also closely related to the celebrated Skyrme model \cite{MR0138394}. \par Denote an arbitrary point in Minkowski space $\mathbb{R}^{1+n}$ by $(t,x)=(x^{\alpha}; 0\leq\alpha\leq n)$ and the space-time derivatives of a function by $D=(\partial_{t},\nabla)=(\partial_{\alpha}; 0\leq\alpha\leq n).$ We raise and lower indices with the Minkowski metric $\eta=(\eta_{\alpha\beta})=\eta^{-1}=(\eta^{\alpha\beta})=\mathrm{diag}(1,-1,\cdots,-1)$. For the Faddeev model, the Lagrangian is given by
\begin{align}\label{labguu} \mathscr{L}({\bf{n}})=\int_{\mathbb{R}^{1+n}}~\frac{1}{2}\partial_{\mu}{\bf{n}}\cdot\partial^{\mu}{\bf{n}}-\frac{1}{4} \big(\partial_{\mu}{\bf{n}}\wedge \partial_{\nu}{\bf{n}}\big)\cdot\big(\partial^{\mu}{\bf{n}}\wedge \partial^{\nu}{\bf{n}}\big)~dxdt, \end{align} where $v_1\wedge v_2$ denotes the cross product of the vectors $v_1$ and $v_2$ in $\mathbb{R}^{3}$ and ${\bf{n}}: \mathbb{R}^{1+n} \longrightarrow \mathbb{S}^2$ is a map from the Minkowski space to the unit sphere in $\mathbb{R}^{3}$. The associated Euler-Lagrange equations take the form \begin{align}\label{elFadd} {\bf{n}}\wedge \partial_{\mu}\partial^{\mu}{\bf{n}}+\big(\partial_{\mu}\big[{\bf{n}}\cdot\big(\partial^{\mu}{\bf{n}}\wedge \partial^{\nu}{\bf{n}}\big)\big]\big)\partial_{\nu}{\bf{n}}=0. \end{align} See Faddeev \cite{MR1553682,Fadder,MR1989187} and Lin and Yang \cite{MR2376667} and references therein. \par The Faddeev model \eqref{elFadd} was introduced to model elementary particles by using continuously extended, topologically characterized, relativistically invariant, locally concentrated, soliton-like fields. The model is not only important in the area of quantum field theory but also provides many interesting and challenging mathematical problems; see, for example, \cite{Cho,MR852091,MR900505,MR2036370,MR1641192,MR1168556,MR1677728}. There have been many interesting results in recent years on mathematical issues of the static Faddeev model; see Lin and Yang \cite{MR2080954, MR2070206, MR2241558, MR2376667, MR2274465} and Faddeev \cite{MR1989187}. However, the original model \eqref{elFadd} is an evolutionary system, which turns out to be an unusual system of nonlinear wave equations enjoying the null structure and containing semilinear terms, quasilinear terms and the unknowns themselves. Lei, Lin and Zhou \cite{MR2754038} gave the first rigorous mathematical result on the evolutionary Faddeev model: for the evolutionary Faddeev model in $\mathbb{R}^{1+2}$, they established the global well-posedness of the Cauchy problem for smooth, compactly supported initial data with small $H^{11}(\mathbb{R}^2)\times H^{10}(\mathbb{R}^2)$ norm. Under the assumption that the system has equivariant form, Geba, Nakanishi and Zhang \cite{MR3456696} obtained the sharp global regularity for the (1+2) dimensional Faddeev model with small critical Besov norm. Large data global well-posedness for the (1+2) dimensional equivariant Faddeev model can be found in Creek \cite{MR3251087} and Geba and Grillakis \cite{MR3909983}. We also refer the reader to Geba and Grillakis's recent monograph \cite{MR3585834} and references therein. \par As mentioned above, the equation \eqref{elFadd} for the evolutionary Faddeev model falls into the class of quasilinear wave equations. For the Cauchy problem of quasilinear wave equations, there are many classical results on the global well-posedness of small perturbations of constant trivial solutions. The global well-posedness for 3-D quasilinear wave equations with null structures and small data can be found in the pioneering works of Christodoulou \cite{MR820070} and Klainerman \cite{MR837683}. In the 2-D case, Alinhac \cite{MR1856402} first obtained the global existence of classical solutions with small data. As far as we know, there are few results on the global regularity of large solutions for quasilinear wave equations. However, for some important physical models, the stability of certain special large solutions can be studied. 
For example, for timelike extremal surface equations, codimension one stability of the catenoid was studied in Donninger, Krieger, Szeftel
and Wong \cite{MR3474816}. Liu and Zhou \cite{ZhouLiu} considered the stability of travelling wave solutions when $n=2$, and Abbrescia and Wong \cite{Wong} treated the $n\geq 3$ case. Some results on global nonlinear stability of large solutions for 3-D nonlinear wave equations with null conditions can be found in Alinhac \cite{MR2603759} and Yang \cite{MR3366921}. \par The main purpose of this paper is to investigate the global nonlinear stability of geodesic solutions of the evolutionary Faddeev model, which are a class of nontrivial, large solutions. The stability of such solutions was first considered by Sideris in the context of wave maps on $\mathbb{R}^{1+3}$ \cite{MR973742}. First, we rewrite the system \eqref{elFadd} in spherical coordinates. Let \begin{align}\label{npolar} {\bf{n}}=(\cos \theta\cos \phi, \cos \theta\sin \phi, \sin \theta)^{\mathbb{T}} \end{align} be a vector on the unit sphere. Here $\theta: \mathbb{R}^{1+n} \longrightarrow [-\pi,\pi]$ and $\phi: \mathbb{R}^{1+n} \longrightarrow [-\frac{\pi}{2},\frac{\pi}{2}]$ stand for the latitude and longitude, respectively. Substituting \eqref{npolar} into \eqref{labguu}, we find that the Lagrangian \eqref{labguu} equals \begin{align}\label{labguu2}
\mathscr{L}(\theta, \phi) &=\int_{\mathbb{R}^{1+n}}~\frac{1}{2}Q(\theta,\theta)+\frac{1}{2}\cos^2\theta ~Q(\phi,\phi)-\frac{1}{4}\cos^2\theta~ Q_{\mu\nu}(\theta,\phi)Q^{\mu\nu}(\theta,\phi)~dxdt, \end{align} where the null forms
\begin{align}\label{nui899} Q(f,g)=\partial_{\mu}f\partial^{\mu}g \end{align} and \begin{align}\label{null2}
Q_{\mu\nu}(f,g)=\partial_{\mu}f\partial_{\nu}g-\partial_{\nu}f\partial_{\mu}g. \end{align} By \eqref{labguu2} and Hamilton's principle, we can get the Euler-Lagrange equations with the following form
\begin{align}\label{syst1} \begin{cases} \Box \theta=F(\theta, D \theta, D\phi, D^2\theta, D^2\phi),\\ \Box \phi=G(\theta, D \theta, D\phi, D^2\theta, D^2\phi), \end{cases} \end{align} where $\Box=\partial_t^2-\Delta$ is the wave operator on $\mathbb{R}^{1+n}$, \begin{align} &F(\theta, D \theta, D\phi, D^2\theta, D^2\phi)\nonumber\\ &=-\frac{1}{2}\sin(2\theta)Q(\phi,\phi)-\frac{1}{4}\sin(2\theta)Q_{\mu\nu}(\theta,\phi)Q^{\mu\nu}(\theta,\phi)\nonumber\\ &~~~-\frac{1}{2}\cos^2\theta Q_{\mu\nu} \big(\phi,Q^{\mu\nu}(\theta,\phi)\big) \end{align} and \begin{align} &G(\theta, D \theta, D\phi, D^2\theta, D^2\phi)\nonumber\\ &=\sin^2\theta \Box \phi+\sin(2\theta)Q(\theta,\phi)+\frac{1}{2}\cos^2\theta Q_{\mu\nu} \big(\theta,Q^{\mu\nu}(\theta,\phi)\big). \end{align} \par We note that if $\Theta=\Theta(t,x)$ satisfies the linear wave equation \begin{align}\label{xianxing} \partial_t^2\Theta-\Delta \Theta=0, \end{align} then $(\theta,\phi)=(\Theta, 0)$ satisfies the system \eqref{syst1}. In this case, ${\bf{n}}=(\cos \Theta, 0, \sin \Theta)^{\mathbb{T}}$ lies on a geodesic of $\mathbb{S}^2$ (i.e., a great circle). Thus, following the definition in Sideris \cite{MR973742}, we call such solutions geodesic solutions. \par In this paper, we will investigate the global nonlinear stability of such geodesic solutions of the Faddeev model, i.e., the solution $(\Theta, 0)$ of system \eqref{syst1} on $\mathbb{R}^{1+n}, n\geq 2$. Here we will only focus on the cases $n=2$ and $n=3$. The (1+3) dimensional Faddeev model is an important physical model in particle physics, while the (1+2) dimensional case is much more complicated than the (1+3) dimensional case from the point of view of mathematical treatment. The $n\geq 4$ case can be treated in the same way as the $n=3$ case. We note that Lei, Lin and Zhou's small data global existence result \cite{MR2754038} can be viewed as some kind of stability result for the trivial geodesic solution $(\theta,\phi)=(0,0)$ of \eqref{syst1} on $\mathbb{R}^{1+2}$.
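We also remark that the fact that $(\Theta,0)$ solves \eqref{syst1} can be seen directly: every term of $F$ and $G$ contains at least one factor involving $D\phi$ or $D^2\phi$ (through $Q(\phi,\phi)$, $Q_{\mu\nu}(\theta,\phi)$ or $\Box\phi$), so setting $\phi=0$ gives $F=G=0$, and \eqref{syst1} reduces to the single linear wave equation $\Box\theta=0$.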
\par The remainder of this introduction will be devoted to the description of some notations, which will be used in the sequel, and statements of global nonlinear stability theorems in $n=3$ and $n=2$ . In Section 2, some necessary tools used to prove global nonlinear stability theorems are introduced. The proof of global nonlinear stability theorems in $n=3$ and $n=2$ will be given in Section 3 and Section 4, respectively.\par \subsection{Notations} Firstly, we introduce some vector fields as in Klainerman \cite{MR784477}. Denote the collection of spatial rotations $ \Omega=(\Omega_{ij}; 1\leq i<j\leq n)$, where $\Omega_{ij}=x_i\partial_j-x_j\partial_i, $ the scaling operator $ S=t\partial_t+x_i\partial_i, $ and the collection of Lorentz boost operators $L=(L_i:1\leq i\leq n)$, $ L_i=t\partial_i+x_i\partial_t,~i=1,\cdots,n. $ Define the vector fields $ \Gamma=(D,\Omega, S,L)=(\Gamma_1,\dots,\Gamma_N), N=2+2n+\frac{(n-1)n}{2}. $ For any given multi-index $a=(a_1,\dots,a_{N}),$ we denote $ \Gamma^{a}=\Gamma_1^{a_1}\cdots\Gamma_{N}^{a_{N}}. $ It can be verified that (see \cite{MR1047332}) \begin{align}\label{gooddecay345}
|D u|\leq C\langle t-r\rangle^{-1}|\Gamma u|, \end{align}
where $\langle \cdot\rangle=(1+|\cdot|^2)^{\frac{1}{2}}$. We will also introduce the good derivatives (see \cite{MR1856402}) \begin{align}\label{googder} T_{\mu}=\omega_{\mu}\partial_t+\partial_{\mu}, \end{align}
where $\omega_0=-1, \omega_i=x_i/r~(i=1,\cdots,n), r=|x|$. Denote $T=(T_0,T_1,\cdots,T_n)$. Compared with \eqref{gooddecay345}, we have the following decay estimate: \begin{align}\label{gooddecay}
|T u|\leq C\langle t+r\rangle^{-1}|\Gamma u|. \end{align} \par The energy associated to the linear wave operator is defined as
\begin{align}
E_1(u(t))=\frac{1}{2}\int_{\mathbb{R}^{n}} \big(|\partial_tu(t,x)|^{2}+ |\nabla u(t,x)|^{2}\big)\, dx, \end{align} and the corresponding $k$-th order energy is given by \begin{align}\label{kord}
E_{k}(u(t))=\sum_{|a|\leq k-1} {E}_1(\Gamma^{a}u(t)). \end{align} \par For getting the global stability of geodesic solutions when $n=2$, we will use some space-time weighted energy estimates and pointwise estimates.
Let $\sigma=t-r$, $q(\sigma)=\arctan\sigma,
q'(\sigma)=\frac{1}{1+\sigma^2}=\langle t-r\rangle^{-2}$. Since $q$ is bounded, there exists a constant $c>1$, such that \begin{align}\label{noting} c^{-1}\leq e^{-q(\sigma)}\leq c. \end{align}
Following Alinhac \cite{MR1856402}, we can introduce the \lq\lq ghost weight energy''
\begin{align}
\mathcal {E}_1(u(t))=\frac{1}{2}\int_{\mathbb{R}^{n}}e^{-q(\sigma)}\left<t-r\right>^{-2}{|Tu|^2}\, dx
\end{align}
and its $k$-th order version
\begin{align}
\mathcal {E}_k(u(t))= \sum_{|a|\leq k-1}\mathcal {E}_1(\Gamma^{a}u(t)).
\end{align} We will also introduce the following weighted~$L^{\infty}$ norm \begin{align}
\mathcal {X}_0(u(t))=\big\|\langle t+|\cdot|\rangle^{\frac{n-1}{2}}\langle t-|\cdot|\rangle^{\frac{n-1}{2}}u(t,\cdot)\big\|_{L^{\infty}(\mathbb{R}^{n})}, \end{align} and its $k$-th order version \begin{align}\label{kord555}
\mathcal{X}_{k}(u(t))=\sum_{|a|\leq k} {\mathcal {X}}_0(\Gamma^{a}u(t)). \end{align} \par For the convenience, for any integer $k$ and $1\leq p\leq +\infty$, we will use the following notations \begin{equation}
\|u(t,\cdot)\|_{W^{k,p}(\mathbb{R}^n)}=\sum_{|a|\leq k}\|\nabla^{a}u(t,\cdot)\|_{L^{p}(\mathbb{R}^n)}, \end{equation} \begin{equation}
\|u(t,\cdot)\|_{\dot{W}^{k,p}(\mathbb{R}^n)}=\sum_{|a|=
k}\|\nabla^{a}u(t,\cdot)\|_{L^{p}(\mathbb{R}^n)}, \end{equation} \begin{equation}
|u(t,\cdot)|_{\Gamma,k}=\sum_{|a|\leq k}|\Gamma^{a}u(t,\cdot)| \end{equation} and \begin{equation}
\|u(t,\cdot)\|_{\Gamma,k,p}=\sum_{|a|\leq k}\|\Gamma^{a}u(t,\cdot)\|_{L^{p}(\mathbb{R}^n)}. \end{equation} \subsection{Main results} In this subsection, we will give the global stability results of geodesic solutions to Faddeev model in three and two dimensions. \par Let $\Theta=\Theta(t,x)$ satisfy \begin{align}\label{xianxingeeeee} \begin{cases} \partial_t^2\Theta-\Delta \Theta=0,~(t,x)\in \mathbb{R}^{1+n}, \\ t=0: \Theta=\Theta_0(x) , \partial_t\Theta=\Theta_1(x), \end{cases} \end{align} where the initial data $\Theta_0$ and $\Theta_1$ are smooth and satisfy \begin{align}\label{hju7899}
\Theta_0(x)=\Theta_1(x)=0,~~|x|\geq 1. \end{align} \par
\par In the following, we will consider the stability of the geodesic solution $(\Theta, 0)$ of system \eqref{syst1}. Let \begin{align}\label{shjui89} (\theta,\phi)=(u+\Theta, v). \end{align} We can easily get the equations for $(u,v)$ as follows: \begin{align}\label{syst3}
It is obvious that the stability of the geodesic solution $(\Theta, 0)$ of system \eqref{syst1} is equivalent to the stability of the zero solution of \eqref{syst3}. Thus we will consider the Cauchy problem of the perturbed system \eqref{syst3} with initial data \begin{align}\label{xuyaoop} t=0: u= u_0(x), \partial_tu= u_1(x),~ v= v_0(x), \partial_tv= v_1(x). \end{align}
\par Due to the introduction of the geodesic solution, there are some linear terms in the equation for $v$ in system \eqref{syst3}. Thus, in order to ensure hyperbolicity, we need some further assumptions on the initial data $(\Theta_0,\Theta_1)$ of system \eqref{xianxingeeeee}.
When $n=3$, we further assume that \begin{align}\label{2mmn}
\lambda_0&=\|\Theta_0\|_{\dot{W}^{3,1}(\mathbb{R}^3)}+\|\Theta_1\|_{\dot{W}^{2,1}(\mathbb{R}^3)}<4\pi^2,\\\label{x867uii}
\lambda_1&=\|\Theta_0\|_{\dot{W}^{4,1}(\mathbb{R}^3)}+\|\Theta_1\|_{\dot{W}^{3,1}(\mathbb{R}^3)}<8\pi,\\\label{hu788}
\lambda&=\|\Theta_0\|_{H^{8}(\mathbb{R}^3)}+\|\Theta_1\|_{H^{7}(\mathbb{R}^3)}<+\infty. \end{align}
Having set down the necessary notation and formulated the Cauchy problem for the perturbed system, we are now ready to state our main results. The first main result in this paper is the following. \begin{thm}\label{mainthm}
When $n=3$, assume that $\Theta_0$ and $\Theta_1$ satisfy \eqref{hju7899}, \eqref{2mmn}--\eqref{hu788}, $\Theta$ satisfies \eqref{xianxingeeeee} and $u_0, u_1, v_0$ and $v_1$ are smooth and supported in $|x|\leq 1$. Then there exist positive constants $A$ and $\varepsilon_0$ such that for any $ 0<\varepsilon\leq\varepsilon_0,$ if \begin{align}
\|u_0\|_{H^{7}(\mathbb{R}^3)}+\|u_1\|_{H^{6}(\mathbb{R}^3)}+\|v_0\|_{H^{7}(\mathbb{R}^3)}+\|v_1\|_{H^{6}(\mathbb{R}^3)}\leq \varepsilon, \end{align} then Cauchy problem \eqref{syst3}--\eqref{xuyaoop} admits a unique global classical solution $(u,v)$ satisfying \begin{align}\label{labejk99} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq A\varepsilon \end{align} for any $T>0$. \end{thm}
When $n=2$, we will assume that \begin{align}\label{3mmn}
\widetilde{\lambda}_0&=\|\Theta_0\|_{\dot{W}^{2,1}(\mathbb{R}^2)}+\|\Theta_1\|_{\dot{W}^{1,1}(\mathbb{R}^2)}<2\pi,\\\label{xui89755hjj}
\widetilde{\lambda}_1&=\|\Theta_0\|_{\dot{W}^{3,1}(\mathbb{R}^2)}+\|\Theta_1\|_{\dot{W}^{2,1}(\mathbb{R}^2)}<4,\\\label{67yu969}
\widetilde{\lambda}&=\|\Theta_0\|_{W^{10,1}(\mathbb{R}^2)}+\|\Theta_1\|_{W^{9,1}(\mathbb{R}^2)}<+\infty. \end{align} \par The second main result in this paper is the following \begin{thm}\label{mainthm2}
When $n=2$, assume that $\Theta_0$ and $\Theta_1$ satisfy \eqref{hju7899}, \eqref{3mmn}--\eqref{67yu969}, $\Theta$ satisfies \eqref{xianxingeeeee} and $u_0, u_1, v_0$ and $v_1$ are smooth and supported in $|x|\leq 1$. Then there exist positive constants $A_1, A_2$ and $\varepsilon_0$ such that for any $ 0<\varepsilon\leq\varepsilon_0,$ if \begin{align}
\|u_0\|_{H^{7}(\mathbb{R}^2)}+\|u_1\|_{H^{6}(\mathbb{R}^2)}+\|v_0\|_{H^{7}(\mathbb{R}^2)}+\|v_1\|_{H^{6}(\mathbb{R}^2)}\leq \varepsilon, \end{align} then Cauchy problem \eqref{syst3}--\eqref{xuyaoop} admits a unique global classical solution $(u,v)$ satisfying \begin{align}\label{labejk99} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq A_1\varepsilon~~\text{and} \sup_{0\leq t\leq T}\big(\mathcal {X}_{4}(u(t))+\mathcal {X}_{4}(v(t))\big)\leq A_2\varepsilon \end{align} for any $T>0$. \end{thm}
\section{Preliminaries} \subsection{Commutation relations} The following lemma concerning the commutation relation between general derivatives, the wave operator and the vector fields was first established by Klainerman \cite{MR784477}. \begin{Lemma}\label{LEM2134} For any given multi-index $a = (a_1, \dots , a_N)$, we have \begin{align}\label{shiyi}
[D,\Gamma^{a}]u=\sum_{|b|\leq |a|-1}c_{ab}D\Gamma^{b}u,\\
[\Box,\Gamma^{a}]u=\sum_{|b|\leq |a|-1}C_{ab}\Gamma^{b}\Box u, \end{align} where $[\cdot,\cdot]$ stands for the commutator, i.e.,~$[A,B]=AB-BA,$ and $c_{ab}$ and $C_{ab}$ are constants.
\end{Lemma} The following relationship between the vector fields $\Gamma$ and the null forms can be found in Klainerman \cite{MR837683}. \begin{Lemma}\label{lem2345} For null forms $Q(u,v)$ and $Q_{\mu\nu}(u,v)$, we have
\begin{align}
\Gamma Q(u,v)=Q(\Gamma u,v)+Q(u,\Gamma v)+\widetilde{Q}(u,v),\\
\Gamma Q_{\mu\nu}(u,v)=Q_{\mu\nu}(\Gamma u,v)+Q_{\mu\nu}(u,\Gamma v)+\widetilde{Q}_{\mu\nu}(u,v),
\end{align}
where $\widetilde{Q}(u,v)$ and $\widetilde{Q}_{\mu\nu}(u,v)$ are some linear combinations of null forms $Q(u,v)$ and $Q_{\mu\nu}(u,v)$. \end{Lemma}
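\begin{rem} To illustrate Lemma \ref{LEM2134} and Lemma \ref{lem2345}, we note that a direct computation gives
\begin{align}
[\Box,\partial_{\mu}]=[\Box,\Omega_{ij}]=[\Box,L_{i}]=0,\qquad [\Box,S]=2\Box,
\end{align}
and that, for the null form $Q$,
\begin{align}
\Omega_{ij}Q(u,v)=Q(\Omega_{ij}u,v)+Q(u,\Omega_{ij}v),\qquad SQ(u,v)=Q(Su,v)+Q(u,Sv)-2Q(u,v),
\end{align}
so that in these particular cases $\widetilde{Q}(u,v)$ is either zero or a constant multiple of $Q(u,v)$. \end{rem}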
\subsection{Null form estimates}
The following lemma gives a useful decay property involving the wave operator. \begin{Lemma}\label{uu679yui} We have \begin{align}\label{gjyuii}
(1+ t) |\Box u|\leq C\sum_{|b|\leq 1}|D\Gamma^{b}u|. \end{align} \end{Lemma} \begin{proof} First, we have the equality \begin{align}\label{hj7899999} t\Box u=(t\partial_t+x_i\partial_i)\partial_tu-(x_i\partial_t+t\partial_i)\partial_iu=S\partial_tu-L_i\partial_iu. \end{align} Then \eqref{gjyuii} follows from \eqref{hj7899999} and \eqref{shiyi}. \end{proof} \begin{Lemma}\label{QLyyy} For null forms $Q(u,v)$ and $Q_{\mu\nu}(u,v)$, we have \begin{align}\label{xkktyyyhu}
|Q(u,v)|+|Q_{\mu\nu}(u,v)|\leq C|Du||Tv|+C|Tu||Dv|. \end{align} \end{Lemma} \begin{proof} By definitions of the null forms \eqref{nui899} and \eqref{null2}, and the good derivatives \eqref{googder}, we have pointwise equalities \begin{align}\label{fr566999966} Q(u,v)=T_{\mu}u\partial^{\mu}v-\omega_{\mu}\partial_tuT^{\mu}v \end{align} and \begin{align}\label{fr56666} Q_{\mu\nu}(u,v)=T_{\mu}u\partial_{\nu}v-T_{\nu}u\partial_{\mu}v-\omega_{\mu}\partial_tuT_{\nu}v+\omega_{\nu}\partial_tuT_{\mu}v. \end{align} \eqref{xkktyyyhu} is just a direct consequence of \eqref{fr566999966} and \eqref{fr56666}. \end{proof} \begin{Lemma}\label{QL} For null forms $Q(u,v)$ and $Q_{\mu\nu}(u,v)$, we have \begin{align}\label{xuyaotyyyhu}
&|\Gamma^aQ(u,v)|+|\Gamma^aQ_{\mu\nu}(u,v)|\leq C\sum_{|b|+|c|\leq |a|}\big(|D\Gamma^{b}u||T\Gamma^{c}v|+|T\Gamma^{b}u||D\Gamma^{c}v|\big) \end{align} and \begin{align}\label{xuyaohu}
&|\Gamma^aQ(u,v)|+|\Gamma^aQ_{\mu\nu}(u,v)|
\leq C\langle t\rangle^{-1}\sum_{|b|+|c|\leq |a|}\big(|D\Gamma^{b}u||\Gamma^{c+1}v|+|\Gamma^{b+1}u||D\Gamma^{c}v|\big). \end{align} \end{Lemma} \begin{proof} \eqref{xuyaotyyyhu} is a consequence of Lemma \ref{lem2345} and Lemma \ref{QLyyy}, while \eqref{xuyaohu} follows from \eqref{xuyaotyyyhu} and \eqref{gooddecay}. \end{proof}
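\begin{rem} Estimate \eqref{xuyaohu} shows that, after the action of the vector fields, null forms can be bounded with an extra factor $\langle t\rangle^{-1}$ compared with a generic product of first order derivatives. It is this additional decay, combined with Alinhac's ghost weight energy method, that will be exploited in the two dimensional case below, where the time decay of solutions is slower. \end{rem}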
\subsection{Sobolev and Hardy type inequalities} To obtain the decay of derivatives of solutions, we will introduce the following famous Klainerman-Sobolev inequality, which was first proved in Klainerman \cite{MR865359}. \begin{Lemma}\label{dfft6889} If $u=u(t,x)$ is a smooth function with sufficient decay at infinity, then we have \begin{align}\label{Sobo}
\langle t+r\rangle^{\frac{n-1}{2}}\langle t-r\rangle^{\frac{1}{2}}|u(t,x)|\leq C\|u(t,\cdot)\|_{\Gamma,k,2},~k>\frac{n}{2}. \end{align} \end{Lemma}
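\begin{rem} In the sequel, \eqref{Sobo} will typically be applied with $u$ replaced by $\Gamma^{a}u$ or $D\Gamma^{a}u$ and with $k=2$; for instance, when $n=3$ this gives
\begin{align}
\langle t+r\rangle\langle t-r\rangle^{\frac{1}{2}}|D\Gamma^{a}u(t,x)|\leq C\|D\Gamma^{a}u(t,\cdot)\|_{\Gamma,2,2},
\end{align}
which is the form used, for example, in \eqref{zhuiyuddf} below. \end{rem}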
When $n=3$, we can also find the following decay estimates in Klainerman \cite{MR837683}. \begin{Lemma}\label{rt6788} If $u=u(t,x)$ is a smooth function with sufficient decay at infinity, then we have \begin{align}\label{Sobo2}
r^{\frac{1}{2}}|u(t,x)|\leq C\sum_{|a|\leq 1}\|\nabla \Omega^{a}u\|_{L^2(\mathbb{R}^3)} \end{align} and \begin{align}\label{Sobo23333}
r|u(t,x)|\leq C\sum_{|a|\leq 1}\|\nabla \Omega^{a}u\|_{L^2(\mathbb{R}^3)}+C\sum_{|a|\leq 2}\|\Omega^{a}u\|_{L^2(\mathbb{R}^3)}. \end{align} \end{Lemma}
The following Hardy type inequality, which allows us to trade the weight $\langle t-r\rangle^{-1}$ for a derivative, was first proved in Lindblad \cite{MR1047332}. \begin{Lemma}\label{rty7ffff78}
If $u=u(t,x)$ is a smooth function supported in $|x|\leq t+1$, then we have the following Hardy type inequality: \begin{align}\label{Hardy}
\|\langle t-r\rangle^{-1}u\|_{L^2(\mathbb{R}^n)}\leq C\|\nabla u\|_{L^2(\mathbb{R}^n)}. \end{align} \end{Lemma}
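\begin{rem} In the applications below, \eqref{Hardy} will usually be applied to $\Gamma^{a}u$ with $|a|\leq 6$ and $u$ supported in $|x|\leq t+1$, in the form
\begin{align}
\|\langle t-r\rangle^{-1}\Gamma^{a}u(t,\cdot)\|_{L^2(\mathbb{R}^n)}\leq C\|\nabla \Gamma^{a}u(t,\cdot)\|_{L^2(\mathbb{R}^n)}\leq CE^{\frac{1}{2}}_{7}(u(t)).
\end{align}
\end{rem}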
\subsection{Estimates of solutions to linear wave equations} The fundamental theorem of calculus implies the following \begin{Lemma}\label{xuyao8900} Let $f:\mathbb{R}^{+}\longrightarrow \mathbb{R}$ be a smooth function with sufficient decay at infinity. Then for any positive integer $m$, we have \begin{align} f(t)=\frac{(-1)^{m}}{(m-1)!}\int_{t}^{+\infty}(s-t)^{m-1}f^{(m)}(s)ds. \end{align} \end{Lemma} To obtain the stability of geodesic solutions of the Faddeev model, we will give some explicit boundedness estimates for solutions to homogeneous linear wave equations in two and three dimensions. \begin{Lemma}\label{3Dbound} Let $u$ be the solution of the following three dimensional linear wave equation \begin{align} \begin{cases} \Box u(t,x)=0,~(t,x)\in \mathbb{R}^{1+3},\\ t=0: u=u_0(x),\partial_tu=u_1(x),~x\in \mathbb{R}^3, \end{cases} \end{align}
where $u_0$ and $u_1$ are smooth functions with compact supports in $|x|\leq 1$. Then we have \begin{align}\label{xuyao890}
\|u(t,\cdot)\|_{L^{\infty}(\mathbb{R}^3)}\leq \frac{1}{8\pi}\big(\|u_0\|_{\dot{W}^{3,1}(\mathbb{R}^3)}+\|u_1\|_{\dot{W}^{2,1}(\mathbb{R}^3)}\big) \end{align} and \begin{align}\label{xuyaodee890}
\|\partial_tu(t,\cdot)\|_{L^{\infty}(\mathbb{R}^3)}\leq \frac{1}{8\pi}\big(\|u_0\|_{\dot{W}^{4,1}(\mathbb{R}^3)}+\|u_1\|_{\dot{W}^{3,1}(\mathbb{R}^3)}\big). \end{align} \end{Lemma} \begin{proof}
By Poisson's formula for the three dimensional linear wave equation, we have \begin{align}\label{shou89} &u(t,x)\nonumber\\
&=\frac{1}{4\pi t}\int_{|y-x|=t}u_1(y)dS_y+\partial_t\big(\frac{1}{4\pi t}\int_{|y-x|=t}u_0(y)dS_y\big)\nonumber\\
&=\frac{t}{4\pi }\int_{|\omega|=1}u_1(x+t\omega)d\omega+\partial_t\big(\frac{t}{4\pi }\int_{|\omega|=1}u_0(x+t\omega)d\omega\big)\nonumber\\
&=\frac{t}{4\pi }\int_{|\omega|=1}u_1(x+t\omega)d\omega+\frac{t}{4\pi }\int_{|\omega|=1}\partial_t\big(u_0(x+t\omega)\big)d\omega+\frac{1}{4\pi }\int_{|\omega|=1}u_0(x+t\omega)d\omega. \end{align}
Lemma \ref{xuyao8900} implies \begin{align}\label{diyi111} u_1(x+t\omega)&=\int_{t}^{+\infty}(r-t)\partial^2_r\big(u_1(x+r\omega)\big)dr,\\\label{xjk9089} \partial_t\big(u_0(x+t\omega)\big)&=\int_{t}^{+\infty}(r-t)\partial^3_r\big(u_0(x+r\omega)\big)dr,\\\label{xjk90l987889} u_0(x+t\omega)&=-\frac{1}{2}\int_{t}^{+\infty}(r-t)^2\partial^3_r\big(u_0(x+r\omega)\big)dr. \end{align} By \eqref{diyi111}, we have \begin{align}\label{sho9000}
&\Big|\frac{t}{4\pi }\int_{|\omega|=1}u_1(x+t\omega)d\omega\Big|\nonumber\\
&\leq \frac{1}{4\pi }\int_{|\omega|=1}\int_{t}^{+\infty}t(r-t)\big|\partial^2_r\big(u_1(x+r\omega)\big)\big|drd\omega\nonumber\\
&\leq \frac{1}{16\pi }\int_{|\omega|=1}\int_{t}^{+\infty}\big|\partial^2_r\big(u_1(x+r\omega)\big)\big|r^2drd\omega\nonumber\\
&\leq \frac{1}{16\pi}\|u_1\|_{\dot{W}^{2,1}(\mathbb{R}^3)}. \end{align} Thanks to \eqref{xjk9089} and \eqref{xjk90l987889}, we also have \begin{align}\label{s890huy}
&\Big|\frac{t}{4\pi }\int_{|\omega|=1}\partial_t\big(u_0(x+t\omega)\big)d\omega\Big|+\Big|\frac{1}{4\pi }\int_{|\omega|=1}u_0(x+t\omega)d\omega\Big|\nonumber\\
&\leq \frac{1}{8\pi }\int_{|\omega|=1}\int_{t}^{+\infty}(r+t)(r-t)\big|\partial^3_r\big(u_0(x+r\omega)\big)\big|drd\omega\nonumber\\
&\leq \frac{1}{8\pi }\int_{|\omega|=1}\int_{t}^{+\infty}\big|\partial^3_r\big(u_0(x+r\omega)\big)\big|r^2drd\omega\nonumber\\
&\leq \frac{1}{8\pi}\|u_0\|_{\dot{W}^{3,1}(\mathbb{R}^3)}. \end{align} Thus, the estimate \eqref{xuyao890} follows from \eqref{shou89}, \eqref{sho9000} and \eqref{s890huy}. Note that $\partial_tu$ satisfies \begin{align} \begin{cases} \Box \partial_tu(t,x)=0,~(t,x)\in \mathbb{R}^{1+3},\\ t=0: \partial_tu=u_1,\partial^2_tu=\Delta u_0,~x\in \mathbb{R}^3. \end{cases} \end{align} Therefore, we can get estimate \eqref{xuyaodee890} similarly. \end{proof} \begin{rem}\label{rem45888} Note that the function $\Theta(t,x)$ satisfies Cauchy problem \eqref{xianxingeeeee}. It follows from Lemma {\rm{\ref{3Dbound}}}, \eqref{2mmn} and \eqref{x867uii} that \begin{align}\label{xuyao86667790}
|\Theta(t,x)|\leq \frac{1}{8\pi}\big(\|\Theta_0\|_{\dot{W}^{3,1}(\mathbb{R}^3)}+\|\Theta_1\|_{\dot{W}^{2,1}(\mathbb{R}^3)}\big)\leq \frac{1}{8\pi}\lambda_0<\frac{\pi}{2} \end{align} and \begin{align}\label{xuyao666776dee890}
|\partial_t\Theta(t,x)|\leq \frac{1}{8\pi}\big(\|\Theta_0\|_{\dot{W}^{4,1}(\mathbb{R}^3)}+\|\Theta_1\|_{\dot{W}^{3,1}(\mathbb{R}^3)}\big)\leq \frac{1}{8\pi}\lambda_1<1. \end{align} \end{rem}
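\begin{rem} The smallness conditions \eqref{2mmn} and \eqref{x867uii} enter the proof of Theorem \ref{mainthm} mainly through \eqref{xuyao86667790} and \eqref{xuyao666776dee890}: together with the smallness of $|u|$, $|Du|$ and $|Dv|$, they guarantee that $\cos^2(u+\Theta)$ and $1-|\partial_t\Theta|^2$ remain bounded away from zero, which will ensure the equivalence of the modified energy with $e_0$ in \eqref{rtt5666666699kkk8} below. \end{rem}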
We can also get the following pointwise estimates for the linear wave equation in two dimensions.
\begin{Lemma}\label{2Dbound} Let $u$ be the solution of the following two dimensional linear wave equation \begin{align} \begin{cases} \Box u(t,x)=0,~(t,x)\in \mathbb{R}^{1+2},\\ t=0: u=u_0(x),\partial_tu=u_1(x),~x\in \mathbb{R}^2, \end{cases} \end{align}
where $u_0$ and $u_1$ are smooth functions with compact supports in $|x|\leq 1$. Then we have \begin{align}\label{hj89900}
\|u(t,\cdot)\|_{L^{\infty}(\mathbb{R}^2)}\leq \frac{1}{4}\big(\|u_0\|_{\dot{W}^{2,1}(\mathbb{R}^2)}+\|u_1\|_{\dot{W}^{1,1}(\mathbb{R}^2)}\big) \end{align} and \begin{align}\label{hjudddddi8900}
\|\partial_tu(t,\cdot)\|_{L^{\infty}(\mathbb{R}^2)}\leq \frac{1}{4}\big(\|u_0\|_{\dot{W}^{3,1}(\mathbb{R}^2)}+\|u_1\|_{\dot{W}^{2,1}(\mathbb{R}^2)}\big). \end{align} \end{Lemma} \begin{proof}
By Poisson's formula for the two dimensional linear wave equation, we have \begin{align}\label{xu89uioo} &u(t,x)\nonumber\\
&=\frac{1}{2\pi }\int_{|y-x|\leq t}\frac{u_1(y)}{\sqrt{t^2-|y-x|^2}}dy+\partial_t\big(\frac{1}{2\pi }\int_{|y-x|\leq t}\frac{u_0(y)}{\sqrt{t^2-|y-x|^2}}dy\big)\nonumber\\
&=\frac{1}{2\pi }\int_0^{t}\int_{|\omega|=1}\frac{u_1(x+r\omega)}{\sqrt{t^2-r^2}}rd\omega dr+\frac{1}{2\pi t}\int_0^{t}\int_{|\omega|=1}\frac{\partial_r\big(u_0(x+r\omega)\big)}{\sqrt{t^2-r^2}}r^2d\omega dr\nonumber\\
&+\frac{1}{2\pi t}\int_0^{t}\int_{|\omega|=1}\frac{u_0(x+r\omega)}{\sqrt{t^2-r^2}}rd\omega dr. \end{align} By Lemma \ref{xuyao8900}, we get \begin{align}\label{impi98756} u_1(x+r\omega)&=-\int_{r}^{+\infty}\partial_{\rho}\big(u_1(x+\rho\omega)\big)d\rho,\\\label{xu89dddd876} \partial_r\big(u_0(x+r\omega)\big)&=-\int_{r}^{+\infty}\partial^2_{\rho}\big(u_0(x+\rho\omega)\big)d\rho,\\\label{impddddddi98756} u_0(x+r\omega)&=\int_{r}^{+\infty}(\rho-r)\partial^2_{\rho}\big(u_0(x+\rho\omega)\big)d\rho. \end{align}
Then, \eqref{impi98756} implies \begin{align}\label{xuyao1fff}
&\Big|\frac{1}{2\pi }\int_0^{t}\int_{|\omega|=1}\frac{u_1(x+r\omega)}{\sqrt{t^2-r^2}}rd\omega dr\Big|\nonumber\\
&\leq \frac{1}{2\pi }\int_0^{t}\frac{1}{\sqrt{t^2-r^2}} dr\sup_{0\leq r\leq t} \int_{r}^{+\infty}\int_{|\omega|=1}\big|\partial_{\rho}\big(u_1(x+\rho\omega)\big)\big|r d\omega d\rho\nonumber\\
&\leq \frac{1}{2\pi }\frac{\pi}{2}\sup_{0\leq r\leq t} \int_{r}^{+\infty}\int_{|\omega|=1}\big|\partial_{\rho}\big(u_1(x+\rho\omega)\big)\big|\rho d\omega d\rho\nonumber\\
&\leq \frac{1}{4}\|u_1\|_{\dot{W}^{1,1}(\mathbb{R}^2)}. \end{align} The combination of \eqref{xu89dddd876} and \eqref{impddddddi98756} gives \begin{align}\label{xuyao5681}
&\Big|\frac{1}{2\pi t}\int_0^{t}\int_{|\omega|=1}\frac{\partial_r\big(u_0(x+r\omega)\big)}{\sqrt{t^2-r^2}}r^2d\omega dr\Big|+\Big|
\frac{1}{2\pi t}\int_0^{t}\int_{|\omega|=1}\frac{u_0(x+r\omega)}{\sqrt{t^2-r^2}}rd\omega dr\Big|\nonumber\\
&\leq \frac{1}{2\pi }\int_0^{t}\frac{1}{\sqrt{t^2-r^2}} dr\sup_{0\leq r\leq t} \int_{r}^{+\infty}\int_{|\omega|=1}\big|\partial^2_{\rho}\big(u_0(x+\rho\omega)\big)\big|\frac{r^2+r(\rho-r)}{t} d\omega d\rho\nonumber\\
&\leq \frac{1}{2\pi }\frac{\pi}{2}\sup_{0\leq r\leq t} \int_{r}^{+\infty}\int_{|\omega|=1}\big|\partial^2_{\rho}\big(u_0(x+\rho\omega)\big)\big|\rho d\omega d\rho\nonumber\\
&\leq \frac{1}{4}\|u_0\|_{\dot{W}^{2,1}(\mathbb{R}^2)}. \end{align}
Thus \eqref{hj89900} follows from \eqref{xu89uioo}, \eqref{xuyao1fff} and \eqref{xuyao5681}. Noting that $\partial_tu$ satisfies \begin{align} \begin{cases} \Box \partial_tu(t,x)=0,~(t,x)\in \mathbb{R}^{1+2},\\ t=0: \partial_tu=u_1,\partial^2_tu=\Delta u_0,~x\in \mathbb{R}^2, \end{cases} \end{align} we can get \eqref{hjudddddi8900} similarly. \end{proof} \begin{rem}\label{rem9999888} Note that the function $\Theta(t,x)$ satisfies Cauchy problem \eqref{xianxingeeeee}. It follows from Lemma {\rm{\ref{2Dbound}}}, \eqref{3mmn} and \eqref{xui89755hjj} that \begin{align}\label{xuyao86669999}
|\Theta(t,x)|\leq \frac{1}{4}\big(\|\Theta_0\|_{\dot{W}^{2,1}(\mathbb{R}^2)}+\|\Theta_1\|_{\dot{W}^{1,1}(\mathbb{R}^2)}\big)\leq \frac{1}{4}\widetilde{\lambda}_0<\frac{\pi}{2} \end{align} and \begin{align}\label{xuyao998878ee890}
|\partial_t\Theta(t,x)|\leq \frac{1}{4}\big(\|\Theta_0\|_{\dot{W}^{3,1}(\mathbb{R}^2)}+\|\Theta_1\|_{\dot{W}^{2,1}(\mathbb{R}^2)}\big)\leq \frac{1}{4}\widetilde{\lambda}_1<1. \end{align} \end{rem}
The following lemma on $L^1$--$L^{\infty}$ estimates can be found in H\"{o}rmander \cite{MR956961} and Klainerman \cite{MR733719}. \begin{Lemma}\label{Linfty} Let $u$ satisfy \begin{align}\label{Linearwave} \begin{cases} \Box u(t,x)=F(t,x),~(t,x)\in \mathbb{R}^{1+n},\\ t=0: u=u_0,\partial_tu=u_1,~x\in \mathbb{R}^n, \end{cases} \end{align}
where the initial data $u_0$ and $u_1$ are supported in $|x|\leq 1$. Then we have \begin{align}
&\|\langle t+|\cdot|\rangle^{\frac{n-1}{2}}\langle t-|\cdot|\rangle^{l}u(t,\cdot)\|_{L^{\infty}(\mathbb{R}^n)}\nonumber\\
&\leq C\big(\|u_0\|_{W^{n,1}(\mathbb{R}^n)}+\|u_1\|_{W^{n-1,1}(\mathbb{R}^n)}\big) + C\int_0^t(1+\tau)^{-\frac{n-1}{2}+l}\|F(\tau,\cdot)\|_{\Gamma,n-1,1}d\tau, \end{align} where $0\leq l\leq \frac{n-1}{2}$. \end{Lemma} \subsection{Some estimates on product functions and composite functions} To estimate the nonlinear terms, we will give the following estimates on product functions and composite functions.
\begin{Lemma}\label{daoshu}
Assume that $u$ and $v$ are smooth functions supported in $|x|\leq t+1$. Then we have \begin{align}\label{xui89njkhhuio}
\|u Dv\|_{L^{\infty}(\mathbb{R}^3)} \leq C\langle t\rangle^{-\frac{3}{2}}E_{3}^{\frac{1}{2}}(u(t))E_{3}^{\frac{1}{2}}(v(t)). \end{align} \end{Lemma} \begin{proof} When $t\leq 2$, \eqref{xui89njkhhuio} is just a consequence of the following Sobolev inequality \begin{align}\label{xuyooo}
\|u\|_{L^{\infty}(\mathbb{R}^3)}\leq C\|\nabla u\|_{L^{2}(\mathbb{R}^3)}+C\|\nabla^2 u\|_{L^{2}(\mathbb{R}^3)}. \end{align} Hence, without loss of generality, we may assume $t\geq 2$ in the following. It follows from Klainerman-Sobolev inequality \eqref{Sobo} for $n=3$ and \eqref{xuyooo} that \begin{align}\label{zhuiyuddf}
&\|uDv\|_{L^{\infty}(r\leq t/4)} \nonumber\\
&\leq C\langle t\rangle^{-\frac{3}{2}} \|u\|_{L^{\infty}(\mathbb{R}^3)}\|\langle t+r\rangle\langle t-r\rangle^{\frac{1}{2}}Dv\|_{L^{\infty}(r\leq t/4)}
\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_3(u(t))E^{\frac{1}{2}}_3(v(t)). \end{align} By Klainerman-Sobolev inequality \eqref{Sobo} for $n=3$ and \eqref{Sobo2}, we have \begin{align}\label{zhufffiyuddf}
&\|uDv\|_{L^{\infty}(r\geq t/4)} \nonumber\\
&\leq C\langle t\rangle^{-\frac{3}{2}} \|r^{1/2}u\|_{L^{\infty}(r\geq t/4)}\|\langle t+r\rangle Dv\|_{L^{\infty}(\mathbb{R}^3)}
\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_3(u(t))E^{\frac{1}{2}}_3(v(t)). \end{align} Therefore, noting \eqref{zhuiyuddf} and \eqref{zhufffiyuddf}, we can get the estimate \eqref{xui89njkhhuio}. \end{proof} \begin{Lemma}\label{benshen}
Assume that $u$ and $v$ are smooth functions supported in $|x|\leq t+1$. Then we have \begin{align}
\|u \langle t-r\rangle^{-1}v\|_{L^{\infty}(\mathbb{R}^3)} \leq C\langle t\rangle^{-\frac{3}{2}}E_{3}^{\frac{1}{2}}(u(t))E_{3}^{\frac{1}{2}}(v(t)). \end{align} \end{Lemma} \begin{proof} Without loss of generality, we can assume $t\geq 2$; the case $t\leq 2$ follows from the Sobolev inequality \eqref{xuyooo} as in the proof of Lemma \ref{daoshu}. We have \begin{align}\label{infwww899}
&\|u \langle t-r\rangle^{-1}v\|_{L^{\infty}(r\geq t/4)}\leq C\langle t\rangle^{-\frac{3}{2}}\| r^{\frac{1}{2}}u\|_{L^{\infty}(\mathbb{R}^3)}\|r \langle t-r\rangle^{-1}v\|_{L^{\infty}(\mathbb{R}^3)}. \end{align} Thanks to \eqref{Sobo2}, we can get \begin{align}\label{infwww89999}
\| r^{\frac{1}{2}}u\|_{L^{\infty}(\mathbb{R}^3)}\leq C\sum_{|\alpha|\leq 1}\|\nabla \Omega^{\alpha}u\|_{L^2(\mathbb{R}^3)}\leq CE_{2}^{\frac{1}{2}}(u(t)). \end{align} In view of \eqref{Sobo23333} and Hardy inequality \eqref{Hardy} for $n=3$, we obtain \begin{align}\label{infwww899999}
&\|r \langle t-r\rangle^{-1}v\|_{L^{\infty}(\mathbb{R}^3)}\nonumber\\
&\leq C\sum_{|\alpha|\leq 1}
\|\nabla (\langle t-r\rangle^{-1}\Omega^{\alpha}v)\|_{L^2(\mathbb{R}^3)}+C\sum_{|\alpha|\leq 2}\|\langle t-r\rangle^{-1} \Omega^{\alpha}v\|_{L^2(\mathbb{R}^3)}\nonumber\\
&\leq C\sum_{|\alpha|\leq 1}
\|\nabla \Omega^{\alpha}v\|_{L^2(\mathbb{R}^3)}+C\sum_{|\alpha|\leq 2}\|\langle t-r\rangle^{-1} \Omega^{\alpha}v\|_{L^2(\mathbb{R}^3)}\nonumber\\
&\leq C\sum_{|\alpha|\leq 2}\|\nabla \Omega^{\alpha}v\|_{L^2(\mathbb{R}^3)}\leq CE_{3}^{\frac{1}{2}}(v(t)). \end{align} The combination of \eqref{infwww899}, \eqref{infwww89999} and \eqref{infwww899999} gives \begin{align}
\|u \langle t-r\rangle^{-1}v\|_{L^{\infty}(r\geq t/4)} \leq C\langle t\rangle^{-\frac{3}{2}}E_{3}^{\frac{1}{2}}(u(t))E_{3}^{\frac{1}{2}}(v(t)). \end{align} The remaining task is to prove \begin{align}\label{scc89jggg}
\|u \langle t-r\rangle^{-1}v\|_{L^{\infty}(r\leq t/4)} \leq C\langle t\rangle^{-\frac{3}{2}}E_{3}^{\frac{1}{2}}(u(t))E_{3}^{\frac{1}{2}}(v(t)). \end{align} Take a smooth function $\chi$ satisfying \begin{align} \chi(\rho)= \begin{cases} 1,~\rho\leq \frac{1}{4},\\ 0,~\rho\geq \frac{1}{2}. \end{cases} \end{align} Then by Sobolev inequality \eqref{xuyooo} and Klainerman-Sobolev inequality \eqref{Sobo} for $n=3$, we have \begin{align}\label{123456}
&\|u \langle t-r\rangle^{-1}v\|_{L^{\infty}(r\leq t/4)}\nonumber\\ &
\leq C\langle t\rangle^{-\frac{5}{2}}\|u\|_{L^{\infty}(\mathbb{R}^3)}\|\langle t+r\rangle\langle t-r\rangle^{\frac{1}{2}}\chi (r/t)v\|_{L^{\infty}(\mathbb{R}^3)}\nonumber\\
&\leq C\langle t\rangle^{-\frac{5}{2}}E_{2}^{\frac{1}{2}}(u(t))\|\chi (r/t)v\|_{\Gamma,2,2}. \end{align} Now we will prove \begin{align}\label{1fffff6}
\|\chi (r/t)v\|_{\Gamma,2,2}\leq CtE_{3}^{\frac{1}{2}}(v(t)). \end{align} Note that \begin{align} \partial_{t}\big(\chi (r/t)\big)=-\frac{r}{t^2}\chi' (r/t),~\partial_{i}\big(\chi (r/t)\big)=\frac{\omega_i}{t}\chi' (r/t), \end{align} and \begin{align} \Omega_{ij}\big(\chi (r/t)\big)=0, ~S\big(\chi (r/t)\big)=0, ~L_i\big(\chi (r/t)\big)=\omega_i(1-\frac{r^2}{t^2}) \chi' (r/t). \end{align} We have \begin{align}\label{hjidddddddi99}
\sum_{|b|=1}\|\Gamma^{b} \big(\chi (r/t)\big)\|_{L^{\infty}(\mathbb{R}^3)}\leq C. \end{align} Similarly, we also have \begin{align}\label{hj89njhuo0}
\sum_{|b|=2}\|\Gamma^b \big(\chi (r/t)\big)\|_{L^{\infty}(\mathbb{R}^3)}\leq C. \end{align} Thus by \eqref{hjidddddddi99}, \eqref{hj89njhuo0} and Hardy inequality \eqref{Hardy} for $n=3$, we have \begin{align}
&\|\chi (r/t)v\|_{\Gamma,2,2}\leq C\sum_{|b|+|c|\leq 2}\|\Gamma^{b}\big(\chi (r/t)\big)\Gamma^{c}v\|_{L^2(r\leq t/2)}\nonumber\\
&\leq C\sum_{|c|\leq 2}\|\Gamma^{c}v\|_{L^2(r\leq t/2)}\leq Ct\sum_{|c|\leq 2}\|\langle t-r\rangle^{-1}\Gamma^{c}v\|_{L^2(r\leq t/2)}\leq Ct E_{3}^{\frac{1}{2}}(v(t)). \end{align} The combination of \eqref{123456} and \eqref{1fffff6} gives \eqref{scc89jggg}. \end{proof} \begin{Lemma}\label{Hardynew2}
Assume that $u, v$ and $w$ are smooth functions supported in $|x|\leq t+1$. If the multi-indices $b,c,d$ satisfy $|b|+|c|+|d|\leq 6$, we have \begin{align}\label{hj8ddddddddcc899}
\|\Gamma^{b}uD\Gamma^{c}vD\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)}\leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_7(u(t))E^{\frac{1}{2}}_7(v(t))E^{\frac{1}{2}}_7(w(t)). \end{align} \end{Lemma} \begin{proof}
If $|b|+|c|\leq 3$, it follows from Lemma \ref{daoshu} that \begin{align}
&\|\Gamma^{b}uD\Gamma^{c}vD\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)}\nonumber\\
&\leq \|\Gamma^{b}uD\Gamma^{c}v\|_{L^{\infty}(\mathbb{R}^3)}\|D\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)}\nonumber\\ &\leq
\|\Gamma^{b}uD\Gamma^{c}v\|_{L^{\infty}(\mathbb{R}^3)}E^{\frac{1}{2}}_7(w(t))\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_7(u(t))E^{\frac{1}{2}}_7(v(t))E^{\frac{1}{2}}_7(w(t)). \end{align}
Using a similar procedure, if $|b|+|d|\leq 3$, we can also get \eqref{hj8ddddddddcc899}. If $|c|+|d|\leq 3$, by Hardy inequality \eqref{Hardy} for $n=3$, \eqref{gooddecay345} and Lemma \ref{daoshu}, we have \begin{align}
&\|\Gamma^{b}uD\Gamma^{c}vD\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)} \nonumber\\
&\leq \|\langle t-r\rangle^{-1}\Gamma^{b}u\|_{L^2(\mathbb{R}^3)} \|\langle t-r\rangle D\Gamma^{c}vD\Gamma^{d}w\|_{L^{\infty}(\mathbb{R}^3)} \nonumber\\
& \leq C\| \Gamma^{c+1}vD\Gamma^{d}w\|_{L^{\infty}(\mathbb{R}^3)}E^{\frac{1}{2}}_7(u(t)) \nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_7(u(t))E^{\frac{1}{2}}_7(v(t))E^{\frac{1}{2}}_7(w(t)). \end{align} \end{proof}
\begin{Lemma}\label{Hardynew2888}
Assume that $u, v$ and $w$ are smooth functions supported in $|x|\leq t+1$. If the multi-indices $b,c,d$ satisfy $|b|+|c|+|d|\leq 6, |d|\leq 5$, we have \begin{align}
\|\Gamma^{b}u \Gamma^{c}vD^2\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)}\leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_7(u(t))E^{\frac{1}{2}}_7(v(t))E^{\frac{1}{2}}_7(w(t)). \end{align} \end{Lemma} \begin{proof}
If $|b|+|d|\leq 3$, it follows from Hardy inequality \eqref{Hardy} for $n=3$, \eqref{gooddecay345} and Lemma \ref{daoshu} that \begin{align}\label{rfg6888}
&\|\Gamma^{b}u \Gamma^{c}vD^2\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)}\nonumber\\
&\leq \|\Gamma^{b}u \langle t-r\rangle D^2\Gamma^{d}w\|_{L^{\infty}(\mathbb{R}^3)}\|\langle t-r\rangle^{-1}\Gamma^{c}v\|_{L^{2}(\mathbb{R}^3)} \nonumber\\
&\leq C\|\Gamma^{b}u D\Gamma^{d+1}w\|_{L^{\infty}(\mathbb{R}^3)} E_{7}^{\frac{1}{2}}(v(t))\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_7(u(t))E^{\frac{1}{2}}_7(v(t))E^{\frac{1}{2}}_7(w(t)). \end{align}
Using a similar procedure, we can also treat the case $|c|+|d|\leq 3$. If $|b|+|c|\leq 3$, by \eqref{gooddecay345} and Lemma \ref{benshen}, we have \begin{align}\label{rfg6889998}
&\|\Gamma^{b}u \Gamma^{c}vD^2\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)}\nonumber\\
&\leq \|\Gamma^{b}u \langle t-r\rangle^{-1}\Gamma^{c}v\|_{L^{\infty}(\mathbb{R}^3)}\|\langle t-r\rangle D^2\Gamma^{d}w\|_{L^{2}(\mathbb{R}^3)} \nonumber\\
&\leq C\|\Gamma^{b}u \langle t-r\rangle^{-1}\Gamma^{c}v\|_{L^{\infty}(\mathbb{R}^3)} E_{7}^{\frac{1}{2}}(w(t))\nonumber\\ & \leq C\langle t\rangle^{-\frac{3}{2}}E^{\frac{1}{2}}_7(u(t))E^{\frac{1}{2}}_7(v(t))E^{\frac{1}{2}}_7(w(t)). \end{align} \end{proof} We also have the following estimate on composite functions, which can be found in Li and Zhou \cite{MR3729480}. \begin{Lemma}\label{composite} Suppose that $H=H(w)$ is a sufficiently smooth function of $w$ with \begin{align}
H(w)=\mathscr{O}(|w|^{1+\beta}), \end{align} where $\beta\geq 0$ is an integer. For any given multi-index $a$, if a function $w=w(t,x)$ satisfies \begin{align}\label{fty}
\|w(t,\cdot)\|_{\Gamma,[\frac{|a|}{2}],\infty}\leq \nu_0, \end{align} where $\nu_0$ is a positive constant, then we have the following pointwise estimate \begin{align}\label{point333}
|\Gamma^{a}H(w(t,x))|\leq C(\nu_0)\sum_{|l_0|+\cdots+|l_{\beta}|\leq |a|}
\prod_{j=0}^{\beta}|\Gamma^{l_{j}}w(t,x)|,
\end{align}
where $C(\nu_0)$ is a positive constant depending only on $\nu_0$. \end{Lemma}
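\begin{rem} As a simple illustration of Lemma \ref{composite}, take $H(w)=\sin(2w)$, so that $\beta=0$. For $|a|=1$ we have $\Gamma H(w)=2\cos(2w)\Gamma w$ and hence $|\Gamma H(w)|\leq 2|\Gamma w|$, in accordance with \eqref{point333}. It is in this form that Lemma \ref{composite} will be applied below to coefficients such as $\sin(2(u+\Theta))$. \end{rem}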
\section{Proof of Theorem \ref{mainthm}} In this section, we shall prove Theorem \ref{mainthm}, i.e., the global nonlinear stability theorem of geodesic solutions for the evolutionary Faddeev model when $n=3$, by a bootstrap argument. Assume that $(u,v)$ is a local classical solution to the Cauchy problem \eqref{syst3}--\eqref{xuyaoop} on $[0, T]$. We will prove that there exist positive constants $A$ and $\varepsilon_0$ such that \begin{align} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq A\varepsilon \end{align} under the assumption \begin{align} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq 2A\varepsilon, \end{align} where $0<\varepsilon\leq \varepsilon_0$. \subsection{Energy estimates} First we will give the estimates on the energies $E_{7}(u(t))$ and $E_{7}(v(t))$. For this purpose, it is necessary to introduce some notation for the nonlinear terms on the right hand side of \eqref{syst3}, which will also be used when $n=2$. Denote \begin{align}\label{333333000} &F(u+\Theta, D(u+\Theta), Dv, D^2(u+\Theta), D^2v)\nonumber\\ &=a_{\mu\nu}(u+\Theta, Dv)\partial_{\mu}\partial_{\nu}(u+\Theta)+b_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}v\nonumber\\
&~~+F_1(u+\Theta, D(u+\Theta), Dv), \end{align} where \begin{align}\label{PF00000} &a_{\mu\nu}(u+\Theta, Dv)\partial_{\mu}\partial_{\nu}(u+\Theta)+b_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}v\nonumber\\ &= -\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(u+\Theta,v)\big) \end{align} and \begin{align}\label{PF11111} &F_1(u+\Theta, D(u+\Theta), Dv)\nonumber\\ &=-\frac{1}{2}\sin(2(u+\Theta))Q(v,v)-\frac{1}{4}\sin(2(u+\Theta))Q_{\mu\nu}(u+\Theta,v)Q^{\mu\nu}(u+\Theta,v). \end{align} We also denote \begin{align} &G(u+\Theta, D(u+\Theta), Dv, D^2(u+\Theta), D^2v)\nonumber\\ &=c_{\mu\nu}(u+\Theta, Dv)\partial_{\mu}\partial_{\nu}(u+\Theta)+d_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}v\nonumber\\
&~~+G_1(u+\Theta, D(u+\Theta), Dv), \end{align} where \begin{align}\label{PF22222} &c_{\mu\nu}(u+\Theta,D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}(u+\Theta)+d_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}v\nonumber\\ &= \sin^2(u+\Theta) \Box v+\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(u+\Theta,v)\big) \end{align} and \begin{align}\label{PF1dddd1111} G_1(u+\Theta, D(u+\Theta), Dv) =\sin(2(u+\Theta))Q(u+\Theta,v). \end{align}
\par
For any multi-index $a, |a|\leq 6$, taking $\Gamma^{a}$ on the equation \eqref{syst3} and noting Lemma \ref{LEM2134}, we have \begin{align}\label{rule10} \Box \Gamma^{a}u&=-\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(\Gamma^au+\Gamma^a\Theta,v)\big)\nonumber\\ &~~~-\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)+f_{a} \end{align} and \begin{align}\label{rule20} \Box \Gamma^{a}v&=\sin^2(u+\Theta) \Box \Gamma^{a}v+\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(\Gamma^{a}u+\Gamma^{a}\Theta,v)\big)\nonumber\\ &~~~+\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(u+\Theta,\Gamma^{a}v)\big)+g_{a}, \end{align} where \begin{align}\label{faaaa} &f_a=\big[\Gamma^{a}, a_{\mu\nu}(u+\Theta, Dv)\partial_{\mu}\partial_{\nu}\big](u+\Theta)+\big[\Gamma^a,b_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big]v\nonumber\\ &~~~~+\Gamma^{a}F_1(u+\Theta, D(u+\Theta), Dv)+[\Box, \Gamma^{a}]u \end{align} and \begin{align}\label{fdddda} &g_a=\big[\Gamma^{a}, c_{\mu\nu}(u+\Theta,D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big](u+\Theta)+\big[\Gamma^a,d_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big]v\nonumber\\ &~~~~+\Gamma^{a}G_1(u+\Theta, D(u+\Theta), Dv)+[\Box, \Gamma^{a}]v. \end{align}
\par By Leibniz's rule, we have \begin{align}\label{rule30}
\langle \partial_t \Gamma^{a}u, \Box \Gamma^{a}u\rangle+ \langle \partial_t \Gamma^{a}v, \Box \Gamma^{a}v\rangle=\partial_te_0+\nabla\cdot q_0, \end{align} where \begin{align}
e_0=\frac{1}{2}\big(|D \Gamma^{a}u|^2+|D \Gamma^{a}v|^2\big). \end{align} Leibniz's rule also gives \begin{align}\label{rule40} &\langle \partial_t \Gamma^{a}u, -\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(\Gamma^au+\Gamma^a\Theta,v)\big)\rangle\nonumber\\ &+ \langle \partial_t \Gamma^{a}u, -\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)\rangle\nonumber\\ &+ \langle \partial_t \Gamma^{a}v, \sin^2(u+\Theta) \Box \Gamma^{a}v\rangle\nonumber\\ &+ \langle \partial_t \Gamma^{a}v, \frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(\Gamma^au+\Gamma^a\Theta,v)\big)\rangle\nonumber\\ &+ \langle \partial_t \Gamma^{a}v, \frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)\rangle\nonumber\\ &=\partial_t\widetilde{e}+\nabla\cdot \widetilde{q}+\widetilde{p}, \end{align} where
\begin{align} \widetilde{ e}=\widetilde{e}_0+e_1
\end{align}
with
\begin{align}
\widetilde{e}_0&=\frac{1}{2} \sin^2(u+\Theta)|D \Gamma^{a}v|^2+ \cos^2(u+\Theta)\partial_t \Gamma^{a}v\partial_{\mu}\Theta Q^{\mu0}(\Theta,\Gamma^av)\nonumber\\ &-\frac{1}{4}\cos^2(u+\Theta)Q_{\mu\nu}(\Theta,\Gamma^{a}v)Q^{\mu\nu}(\Theta,\Gamma^av) \end{align} and \begin{align}\label{89jjyu6532} e_1&=-\cos^2(u+\Theta)\partial_t \Gamma^{a}u\partial_{\mu}v\big(Q^{\mu0}(\Gamma^au,v)+Q^{\mu0}(u+\Theta,\Gamma^av)\big)\nonumber\\ &+\cos^2(u+\Theta)\partial_t \Gamma^{a}v\partial_{\mu}(u+\Theta)\big(Q^{\mu0}(\Gamma^au,v)+Q^{\mu0}(u,\Gamma^av)\big)\nonumber\\ &+\cos^2(u+\Theta)\partial_t \Gamma^{a}v\partial_{\mu}uQ^{\mu0}(\Theta,\Gamma^av)\nonumber\\ &+\frac{1}{4}\cos^2(u+\Theta)Q_{\mu\nu}(v,\Gamma^au)\big(Q^{\mu\nu}(\Gamma^au,v)+2Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)\nonumber\\ &-\frac{1}{4}\cos^2(u+\Theta)Q_{\mu\nu}(u+2\Theta,\Gamma^{a}v)Q^{\mu\nu}(u,\Gamma^av),
\end{align} and \begin{align}\label{with00000} \widetilde{p}= &-\frac{1}{2}\sin(2(u+\Theta))\big(\partial_t \Gamma^{a}vQ(u+\Theta,\Gamma^{a}v)+\partial_i \Gamma^{a}vQ_{0i}(u+\Theta,\Gamma^{a}v)\big)\nonumber\\ &+\frac{1}{2}\cos^2(u+\Theta)\partial_t \Gamma^{a}v Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(\Gamma^a\Theta,v)\big)\nonumber\\ &+\frac{1}{2}\cos^2(u+\Theta)Q_{\mu\nu}(\partial_tu+\partial_t\Theta,\Gamma^{a}v)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &-\frac{1}{4}\sin(2(u+\Theta))Q_{\mu\nu}(u+\Theta,\Gamma^{a}v)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ & -\frac{1}{2}\cos^2(u+\Theta)Q_{\mu\nu}(\partial_tv,\Gamma^au)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ & -\frac{1}{2}\cos^2(u+\Theta)\partial_t \Gamma^{a}uQ_{\mu\nu} \big(v,Q^{\mu\nu}(\Gamma^a\Theta,v)\big)\nonumber\\ &-\frac{1}{2}\sin(2(u+\Theta))\partial_t \Gamma^{a}uQ_{\mu\nu}(v,u+\Theta)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ &+\frac{1}{4}\sin(2(u+\Theta))(\partial_tu+\partial_t\Theta)Q_{\mu\nu}(v,\Gamma^au)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ &-\frac{1}{2}\cos^2(u+\Theta)Q_{\mu\nu}(\partial_tv, \Gamma^au)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &-\frac{1}{2}\cos^2(u+\Theta)Q_{\mu\nu}(v, \Gamma^au)Q^{\mu\nu}(\partial_tu+\partial_t\Theta,\Gamma^av)\nonumber\\ &-\frac{1}{2}\sin(2(u+\Theta))\partial_t \Gamma^{a}uQ_{\mu\nu}(v,u+\Theta)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &+\frac{1}{2}\sin(2(u+\Theta))(\partial_tu+\partial_t\Theta)Q_{\mu\nu}(v,\Gamma^{a}u)Q^{\mu\nu}(u+\Theta,\Gamma^av). \end{align} By \eqref{rule10}, \eqref{rule20}, \eqref{rule30}, \eqref{rule40} and the divergence theorem, we can get \begin{align}\label{rule50} &\frac{d}{dt}\int_{\mathbb{R}^3}\big(e_0(t,x)-\widetilde{ e}(t,x)\big) dx\nonumber\\
&\leq \int_{\mathbb{R}^3}|\widetilde{p}(t,x)| dx+\int_{\mathbb{R}^3}|\langle \partial_t \Gamma^{a}u, f_a\rangle| dx+\int_{\mathbb{R}^3}|\langle \partial_t \Gamma^{a}v, g_a\rangle| dx. \end{align} Noting \begin{align} \widetilde{e}_0
&=\frac{1}{2} \sin^2(u+\Theta)|D \Gamma^{a}v|^2\nonumber\\ &+\frac{1}{2}\cos^2(u+\Theta)\big(2\partial_t \Gamma^{a}v\partial_{\mu}\Theta Q^{\mu0}(\Theta,\Gamma^av)-\frac{1}{2}Q_{\mu\nu}(\Theta,\Gamma^{a}v)Q^{\mu\nu}(\Theta,\Gamma^av)\big)\nonumber\\
&=\frac{1}{2} \sin^2(u+\Theta)|D \Gamma^{a}v|^2\nonumber\\
&+\frac{1}{2}\cos^2(u+\Theta)\big(|\partial_t \Theta|^2|\nabla \Gamma^{a}v|^2-|\nabla \Theta|^2|\partial_t \Gamma^{a}v|^2-\frac{1}{2}Q_{ij}(\Theta,\Gamma^{a}v)Q_{ij}(\Theta,\Gamma^{a}v)\big)\nonumber\\
&=\frac{1}{2} \sin^2(u+\Theta)|D \Gamma^{a}v|^2+\frac{1}{2}\cos^2(u+\Theta)\big(Q(\Theta,\Theta)|\nabla \Gamma^{a}v|^2+(\nabla \Theta\cdot \nabla \Gamma^{a}v)^2\big), \end{align} we have \begin{align}\label{89iytr9900} &e_0-\widetilde{e}_0\nonumber\\
&=\frac{1}{2}|D \Gamma^{a}u|^2+\frac{1}{2}\cos^2(u+\Theta)|\partial_t \Gamma^{a}v|^2\nonumber\\
&+\frac{1}{2}\cos^2(u+\Theta)\big((1-Q(\Theta,\Theta))|\nabla \Gamma^{a}v|^2-(\nabla \Theta\cdot \nabla \Gamma^{a}v)^2\big)\nonumber\\
&\geq \frac{1}{2}|D \Gamma^{a}u|^2+\frac{1}{2}\cos^2(u+\Theta)|\partial_t \Gamma^{a}v|^2+\frac{1}{2}\cos^2(u+\Theta)(1-|\partial_t\Theta|^2)|\nabla \Gamma^{a}v|^2. \end{align}
In view of \eqref{89iytr9900} and \eqref{89jjyu6532}, it follows from Remark \ref{rem45888} and the smallness of $|u|, |Du|$ and $|Dv|$ that there exists a positive constant $c_1=c_1(\lambda_0,\lambda_1)$ such that \begin{align}\label{rtt5666666699kkk8} c_1^{-1}e_0\leq e_0-\widetilde{e}=e_0-\widetilde{e}_0-e_1 \leq c_1e_0. \end{align} \par Now we estimate all the terms on the right hand side of \eqref{rule50}. In view of \eqref{with00000}, we have \begin{align}\label{bianjie}
\int_{\mathbb{R}^3}|\widetilde{p}(t,x)| dx
&\leq C\|( |u|+|\Theta|)(|Du|+|D\Theta|)|D\Gamma^{a}v|^2 \|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C\|\big( |Du|+|D\Theta|\big)\big( |D^2u|+|D^2\Theta|\big)|D\Gamma^{a}v|^2 \|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C \|\big( |Du|+|D\Theta|\big)DvD^2\Gamma^{a}\Theta D\Gamma^{a}v \|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C \|\big( |Du|+|D\Theta|\big)D^2vD\Gamma^{a}\Theta D\Gamma^{a}v \|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C \| DvDvD^2\Gamma^{a}\Theta D\Gamma^{a}u \|_{L^1(\mathbb{R}^3)}+C \| DvD^2vD\Gamma^{a}\Theta D\Gamma^{a}u \|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C\|Dv(|D\Theta|+|Dv|+|D^2v|)|D\Gamma^{a}u|^2\|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C\|Dv(|D\Theta|+|Du|)D\Gamma^{a}uD\Gamma^{a}v\|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C\|Dv(|D^2\Theta|+|D^2u|)D\Gamma^{a}uD\Gamma^{a}v\|_{L^1(\mathbb{R}^3)}\nonumber\\
&+C\|D^2v(|D\Theta|+|Du|)D\Gamma^{a}uD\Gamma^{a}v\|_{L^1(\mathbb{R}^3)}. \end{align} For the terms on the right hand side of \eqref{bianjie}, the first term is most important. By Lemma \ref{Hardynew2}, we have \begin{align}
&\|( |u|+|\Theta|)(|Du|+|D\Theta|)|D\Gamma^{a}v|^2 \|_{L^1(\mathbb{R}^3)}\nonumber\\
&\leq \|( |u|+|\Theta|)(|Du|+|D\Theta|)D\Gamma^{a}v\|_{L^2(\mathbb{R}^3)}\|D\Gamma^{a}v \|_{L^2(\mathbb{R}^3)}\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}} \big(E_7(\Theta(t))+E_7(u(t))\big)E_7(v(t)). \end{align} By Klainerman-Sobolev inequality \eqref{Sobo},
we can also get that the remaining terms on the right hand side of \eqref{bianjie} can be controlled by
\begin{align}
\langle t\rangle^{-\frac{3}{2}} \big(E_8(\Theta(t))+E_7(u(t))+E_7(v(t))\big)\big(E_7(u(t))+E_7(v(t))\big).
\end{align} Therefore, \eqref{bianjie} can be estimated as \begin{align}\label{biaddddhhhhnjie}
\|\widetilde{p}(t,\cdot)\|_{L^1(\mathbb{R}^3)}\leq C \langle t\rangle^{-\frac{3}{2}} \big(E_8(\Theta(t))+E_7(u(t))+E_7(v(t))\big)\big(E_7(u(t))+E_7(v(t))\big). \end{align} By the energy estimate of \eqref{xianxingeeeee}, and noting \eqref{hju7899} and \eqref{hu788}, we can get \begin{align}\label{7890hjy65}
E^{\frac{1}{2}}_8(\Theta(t))\leq C\big(\|\Theta_0\|_{H^{8}(\mathbb{R}^3)}+\|\Theta_1\|_{H^{7}(\mathbb{R}^3)}\big)\leq C\lambda. \end{align}
\par In the following, we will estimate $\|\partial_t \Gamma^{a}uf_a\|_{L^1(\mathbb{R}^3)}$ and $\|\partial_t \Gamma^{a}vg_a\|_{L^1(\mathbb{R}^3)}.$ It is obvious that \begin{align}
&\|\partial_t \Gamma^{a}uf_a\|_{L^1(\mathbb{R}^3)}+\|\partial_t \Gamma^{a}vg_a\|_{L^1(\mathbb{R}^3)}\nonumber\\
&\leq \big({E}_7^{\frac{1}{2}}(u(t))+ {E}_7^{\frac{1}{2}}(v(t))\big)\big(\|f_a\|_{L^2(\mathbb{R}^3)}+\|g_a\|_{L^2(\mathbb{R}^3)}\big) \end{align} and \begin{align}\label{fgtyyyeee00}
&\|f_a\|_{L^2(\mathbb{R}^3)}+\|g_a\|_{L^2(\mathbb{R}^3)}\nonumber\\
&\leq \|\Gamma^{a}F_1(u+\Theta, D(u+\Theta), Dv)\|_{L^2(\mathbb{R}^3)}+\| \Gamma^{a}G_1(u+\Theta, D(u+\Theta), Dv)\|_{L^2(\mathbb{R}^3)}\nonumber\\ &+
\|\big[\Gamma^{a}, a_{\mu\nu}(u+\Theta, Dv)\partial_{\mu}\partial_{\nu}\big](u+\Theta)\|_{L^2(\mathbb{R}^3)}+\|\big[\Gamma^a,b_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big]v\|_{L^2(\mathbb{R}^3)}\nonumber\\ &+
\|\big[\Gamma^{a}, c_{\mu\nu}(u+\Theta,D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big](u+\Theta)\|_{L^2(\mathbb{R}^3)}\nonumber\\
&+\|\big[\Gamma^a,d_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big]v\|_{L^2(\mathbb{R}^3)}+\|[\Box, \Gamma^{a}]u\|_{L^2(\mathbb{R}^3)}+\|[\Box, \Gamma^{a}]v\|_{L^2(\mathbb{R}^3)}. \end{align} In view of \eqref{PF00000}, \eqref{PF11111}, \eqref{PF22222} and \eqref{PF1dddd1111}, for the terms on the right hand side of \eqref{fgtyyyeee00}, we will only focus on the estimates of the following ones \begin{align}\label{fgtyyyeee8899900888}
&\|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma,6,2}+\|\sin(2(u+\Theta))Q(u+\Theta,v)\|_{\Gamma,6,2}\nonumber\\
&~~~~~~~~~~~~~~~~~~~~+\sum_{\substack{|\beta|+|d|\leq 6\\ |d|\leq 5}}\|\Gamma^{\beta}\sin^2(u+\Theta)\Box\Gamma^{d}v\|_{L^2(\mathbb{R}^3)}. \end{align}
The remaining terms can be treated similarly. \par
It follows from Lemma \ref{composite} and Lemma \ref{Hardynew2} that \begin{align}\label{5677700}
&\|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma,6,2}\nonumber\\
&\leq C\sum_{|b|+|\beta|\leq 6}\|\Gamma^{b}\sin(2(u+\Theta))\Gamma^{\beta}Q(v,v)\|_{L^2(\mathbb{R}^3)}\nonumber\\
&\leq C\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}(u+\Theta)D\Gamma^{c}vD\Gamma^{d}v\|_{L^2(\mathbb{R}^3)}\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}\big(E_{7}^{\frac{1}{2}}(u(t))+E^{\frac{1}{2}}_{7}(\Theta(t))\big)E_{7}(v(t)). \end{align} Similarly, we also have \begin{align}\label{56777fff00}
&\|\sin(2(u+\Theta))Q(u+\Theta,v)\|_{\Gamma,6,2}\nonumber\\
&\leq C\sum_{|b|+|\beta|\leq 6}\|\Gamma^{b}\sin(2(u+\Theta))\Gamma^{\beta}Q(u+\Theta,v)\|_{L^2(\mathbb{R}^3)}\nonumber\\
&\leq C\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}(u+\Theta)D\Gamma^{c}(u+\Theta)D\Gamma^{d}v\|_{L^2(\mathbb{R}^3)}\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}\big(E_{7}(u(t))+E_{7}(\Theta(t))\big)E^{\frac{1}{2}}_{7}(v(t)). \end{align} By Lemma \ref{composite} and Lemma \ref{Hardynew2888}, we have \begin{align}
&\sum_{\substack{|\beta|+|d|\leq 6\\ |d|\leq 5}}\|\Gamma^{\beta}\sin^2(u+\Theta)\Box\Gamma^{d}v\|_{L^2(\mathbb{R}^3)}\nonumber\\
&\leq C\sum_{\substack{|b|+|c|+|d|\leq 6\\|d|\leq 5}}\|\Gamma^{b}(u+\Theta)\Gamma^{c}(u+\Theta)D^2\Gamma^{d}v\|_{L^2(\mathbb{R}^3)}\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}\big(E_{7}(u(t))+E_{7}(\Theta(t))\big)E^{\frac{1}{2}}_{7}(v(t)). \end{align} From the above discussion, we obtain \begin{align}\label{xuyoa99088}
&\|\partial_t \Gamma^{a}uf_a\|_{L^1(\mathbb{R}^3)}+\|\partial_t \Gamma^{a}vg_a\|_{L^1(\mathbb{R}^3)}\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}} \big(E_8(\Theta(t))+E_7(u(t))+E_7(v(t))\big)\big(E_7(u(t))+E_7(v(t))\big). \end{align}
\par Thanks to \eqref{rule50}, \eqref{rtt5666666699kkk8}, \eqref{biaddddhhhhnjie}, \eqref{7890hjy65} and \eqref{xuyoa99088}, we get \begin{align} &E_7(u(t))+E_7(v(t))\nonumber\\ &\leq C\varepsilon^2+C\int_0^{t} \langle \tau\rangle^{-\frac{3}{2}} \big(E_7(u(\tau))+E_7(v(\tau))\big)^2d\tau\nonumber\\ &~~~~~~+C\int_0^{t} \langle \tau\rangle^{-\frac{3}{2}}E_8(\Theta(\tau))\big(E_7(u(\tau))+E_7(v(\tau))\big)d\tau\nonumber\\ &\leq C\varepsilon^2+16CA^4\varepsilon^4+C\int_0^{t} \langle \tau\rangle^{-\frac{3}{2}}\big(E_7(u(\tau))+E_7(v(\tau))\big)d\tau. \end{align} By Gronwall's inequality, we have \begin{align}\label{hjii908ggg8} {E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\leq C_0\varepsilon+4C_0A^2\varepsilon^2. \end{align}
\subsection{Conclusion of the proof} Noting \eqref{hjii908ggg8}, we have obtained \begin{align} \sup_{0\leq t\leq T}\big({E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\big)\leq C_0\varepsilon+4C_0A^2\varepsilon^2. \end{align} Assume that \begin{align} E^{\frac{1}{2}}_7(u(0))+E^{\frac{1}{2}}_7(v(0))\leq \widetilde{C}_0\varepsilon. \end{align} Take $A=\max\{4C_0,4 \widetilde{C}_0\}$ and $\varepsilon_0$ sufficiently small such that \begin{align} 16C_0A\varepsilon\leq 1. \end{align} Then for any $0<\varepsilon\leq \varepsilon_0$, we have \begin{align} \sup_{0\leq t\leq T}\big({E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\big)\leq A\varepsilon, \end{align} which completes the proof of Theorem \ref{mainthm}.
\section{Proof of Theorem \ref{mainthm2}} In this section, we will prove Theorem \ref{mainthm2}, i.e., the global nonlinear stability theorem of geodesic solutions for the evolutionary Faddeev model when $n=2$, by a suitable bootstrap argument. We note that in the proof of Theorem \ref{mainthm}, i.e., the global nonlinear stability theorem of geodesic solutions for the evolutionary Faddeev model when $n=3$, only the energy estimate is used and the null structure of the system \eqref{syst3} is not employed. The $n=2$ case is much more complicated due to the slower decay in time. In order to prove Theorem \ref{mainthm2}, we will exploit the null structure of the system \eqref{syst3} in energy estimates by using Alinhac's ghost weight energy method. To get a sufficient decay rate, we will also use H\"{o}rmander's $L^1$--$L^{\infty}$ estimates, in which the null structure will also be employed. The common feature in the use of these estimates is the full exploitation of the decay in $\langle t-r\rangle$, in addition to the decay in $\langle t\rangle$. A similar idea can also be found in Zha \cite{MR3912654}, which is partially inspired by Alinhac \cite{MR1856402} and Katayama \cite{MR1371789}. \par Assume that $(u,v)$ is a local classical solution to the Cauchy problem \eqref{syst3}--\eqref{xuyaoop} on $[0, T]$.
We will prove that there exist positive constants $A_1, A_2$ and $\varepsilon_0$ such that \begin{align} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq A_1\varepsilon~~\text{and}~~\sup_{0\leq t\leq T}\big(\mathcal {X}_{4}(u(t))+\mathcal {X}_{4}(v(t))\big)\leq A_2\varepsilon \end{align} under the assumption \begin{align} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq 2A_1\varepsilon~~\text{and}~~\sup_{0\leq t\leq T}\big(\mathcal {X}_{4}(u(t))+\mathcal {X}_{4}(v(t))\big)\leq 2A_2\varepsilon, \end{align} where $0<\varepsilon\leq \varepsilon_0$. \subsection{Energy estimates} In this subsection, we will first give the estimates on the energies $E_{7}(u(t))$ and $E_{7}(v(t))$.
Similarly to the 3-D case, thanks to Lemma \ref{LEM2134}, for any multi-index $a, |a|\leq 6$, we have \begin{align}\label{rule1} \Box \Gamma^{a}u&=-\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(\Gamma^au+\Gamma^a\Theta,v)\big)\nonumber\\ &~~~-\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)+f_{a} \end{align} and \begin{align}\label{rule2} \Box \Gamma^{a}v&=\sin^2(u+\Theta) \Box \Gamma^{a}v+\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(\Gamma^{a}u+\Gamma^{a}\Theta,v)\big)\nonumber\\ &~~~+\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(u+\Theta,\Gamma^{a}v)\big)+g_{a}, \end{align} where $f_a$ and $g_a$ are defined through \eqref{faaaa} and \eqref{fdddda}.
\par By Leibniz's rule, we have \begin{align}\label{rule3}
\langle e^{-q(\sigma)}\partial_t \Gamma^{a}u, \Box \Gamma^{a}u\rangle+ \langle e^{-q(\sigma)}\partial_t \Gamma^{a}v, \Box \Gamma^{a}v\rangle=\partial_te_0+\nabla\cdot q_0+p_0, \end{align} where \begin{align}
e_0=\frac{1}{2}e^{-q(\sigma)}\big(|D \Gamma^{a}u|^2+|D \Gamma^{a}v|^2\big) \end{align} and \begin{align}
p_0=\frac{1}{2}e^{-q(\sigma)} q'(\sigma)\big(|T\Gamma^{a}u|^2+|T\Gamma^{a}v|^2\big). \end{align} Leibniz's rule also gives \begin{align}\label{rule4} &\langle e^{-q(\sigma)}\partial_t \Gamma^{a}u, -\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(\Gamma^au+\Gamma^a\Theta,v)\big)\rangle\nonumber\\ &+ \langle e^{-q(\sigma)}\partial_t \Gamma^{a}u, -\frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(v,Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)\rangle\nonumber\\ &+ \langle e^{-q(\sigma)}\partial_t \Gamma^{a}v, \sin^2(u+\Theta) \Box \Gamma^{a}v\rangle\nonumber\\ &+ \langle e^{-q(\sigma)}\partial_t \Gamma^{a}v, \frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(\Gamma^au+\Gamma^a\Theta,v)\big)\rangle\nonumber\\ &+ \langle e^{-q(\sigma)}\partial_t \Gamma^{a}v, \frac{1}{2}\cos^2(u+\Theta) Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)\rangle\nonumber\\ &=\partial_t\widetilde{e}+\nabla\cdot \widetilde{q}+\widetilde{p}, \end{align} where
\begin{align} \widetilde{ e}=\widetilde{e}_0+e_1
\end{align}
with
\begin{align}
\widetilde{e}_0&=\frac{1}{2} e^{-q(\sigma)}\sin^2(u+\Theta)|D \Gamma^{a}v|^2\nonumber\\ &+ e^{-q(\sigma)}\cos^2(u+\Theta)\partial_t \Gamma^{a}v\partial_{\mu}\Theta Q^{\mu0}(\Theta,\Gamma^av)\nonumber\\ &-\frac{1}{4}e^{-q(\sigma)}\cos^2(u+\Theta)Q_{\mu\nu}(\Theta,\Gamma^{a}v)Q^{\mu\nu}(\Theta,\Gamma^av) \end{align} and \begin{align}\label{e111990hjuu} e_1&=-e^{-q(\sigma)}\cos^2(u+\Theta)\partial_t \Gamma^{a}u\partial_{\mu}v\big(Q^{\mu0}(\Gamma^au,v)+Q^{\mu0}(u+\Theta,\Gamma^av)\big)\nonumber\\ &+e^{-q(\sigma)}\cos^2(u+\Theta)\partial_t \Gamma^{a}v\partial_{\mu}(u+\Theta)\big(Q^{\mu0}(\Gamma^au,v)+Q^{\mu0}(u,\Gamma^av)\big)\nonumber\\ &+e^{-q(\sigma)}\cos^2(u+\Theta)\partial_t \Gamma^{a}v\partial_{\mu}uQ^{\mu0}(\Theta,\Gamma^av)\nonumber\\ &+\frac{1}{4}e^{-q(\sigma)}\cos^2(u+\Theta)Q_{\mu\nu}(v,\Gamma^au)\big(Q^{\mu\nu}(\Gamma^au,v)+2Q^{\mu\nu}(u+\Theta,\Gamma^av)\big)\nonumber\\ &-\frac{1}{4}e^{-q(\sigma)}\cos^2(u+\Theta)Q_{\mu\nu}(u+2\Theta,\Gamma^{a}v)Q^{\mu\nu}(u,\Gamma^av),
\end{align} and \begin{align}\label{with0} \widetilde{p}
&=\frac{1}{2}\ e^{-q(\sigma)}\sin^2(u+\Theta)q'(\sigma){|T\Gamma^{a}v|^2}\nonumber\\ &-\frac{1}{2}e^{-q(\sigma)}\sin(2(u+\Theta))\big(\partial_t \Gamma^{a}vQ(u+\Theta,\Gamma^{a}v)+\partial_i \Gamma^{a}vQ_{0i}(u+\Theta,\Gamma^{a}v)\big)\nonumber\\ &+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)\partial_t \Gamma^{a}v Q_{\mu\nu} \big(u+\Theta,Q^{\mu\nu}(\Gamma^a\Theta,v)\big)\nonumber\\ &+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)Q_{\mu\nu}(\partial_tu+\partial_t\Theta,\Gamma^{a}v)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &-e^{-q(\sigma)}q'(\sigma)\cos^2(u+\Theta)T_{\nu} \Gamma^{a}v\partial_{\mu}(u+\Theta)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &+\frac{1}{4}e^{-q(\sigma)}q'(\sigma)\cos^2(u+\Theta)Q_{\mu\nu}(u+\Theta,\Gamma^{a}v)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &-\frac{1}{4}e^{-q(\sigma)}\sin(2(u+\Theta))Q_{\mu\nu}(u+\Theta,\Gamma^{a}v)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ & -\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)Q_{\mu\nu}(\partial_tv,\Gamma^au)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ & -\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)\partial_t \Gamma^{a}uQ_{\mu\nu} \big(v,Q^{\mu\nu}(\Gamma^a\Theta,v)\big)\nonumber\\ &+e^{-q(\sigma)}q'(\sigma)\cos^2(u+\Theta)T_{\nu} \Gamma^{a}u\partial_{\mu}vQ^{\mu\nu}(\Gamma^au,v)\nonumber\\ &-\frac{1}{4}e^{-q(\sigma)}q'(\sigma)\cos^2(u+\Theta)Q_{\mu\nu}(v,\Gamma^au)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ &-\frac{1}{2}e^{-q(\sigma)}\sin(2(u+\Theta))\partial_t \Gamma^{a}uQ_{\mu\nu}(v,u+\Theta)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ &+\frac{1}{4}e^{-q(\sigma)}\sin(2(u+\Theta))(\partial_tu+\partial_t\Theta)Q_{\mu\nu}(v,\Gamma^au)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ &-\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)Q_{\mu\nu}(\partial_tv, \Gamma^au)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &-\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)Q_{\mu\nu}(v, \Gamma^au)Q^{\mu\nu}(\partial_tu+\partial_t\Theta,\Gamma^av)\nonumber\\ &+e^{-q(\sigma)}q'(\sigma)\cos^2(u+\Theta)T_{\nu} \Gamma^{a}u\partial_{\mu}vQ^{\mu\nu}(u+\Theta,\Gamma^av) \end{align} \begin{align} &-\frac{1}{2}e^{-q(\sigma)}\sin(2(u+\Theta))\partial_t \Gamma^{a}uQ_{\mu\nu}(v,u+\Theta)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &+\frac{1}{2}e^{-q(\sigma)}\sin(2(u+\Theta))(\partial_tu+\partial_t\Theta)Q_{\mu\nu}(v,\Gamma^{a}u)Q^{\mu\nu}(u+\Theta,\Gamma^av)\nonumber\\ &-e^{-q(\sigma)}q'(\sigma)\cos^2(u+\Theta)T_{\nu} \Gamma^{a}v\partial_{\mu}(u+\Theta)Q^{\mu\nu}(\Gamma^au,v)\nonumber\\ &+\frac{1}{2}e^{-q(\sigma)}q'(\sigma)\cos^2(u+\Theta)Q_{\mu\nu}(u+\Theta,\Gamma^{a}v)Q^{\mu\nu}(\Gamma^au,v). \end{align}
By \eqref{rule1}, \eqref{rule2}, \eqref{rule3}, \eqref{rule4} and the divergence theorem, we can get \begin{align}\label{rule5} &\frac{d}{dt}\int_{\mathbb{R}^2}\big(e_0(t,x)-\widetilde{ e}(t,x)\big) dx+\int_{\mathbb{R}^2}p_0(t,x) dx\nonumber\\
&\leq \int_{\mathbb{R}^2}|\widetilde{p}(t,x)| dx+\int_{\mathbb{R}^2}|\langle e^{-q(\sigma)}\partial_t \Gamma^{a}u, f_a\rangle| dx+\int_{\mathbb{R}^2}|\langle e^{-q(\sigma)}\partial_t \Gamma^{a}v, g_a\rangle| dx. \end{align} Noting \begin{align} \widetilde{e}_0
&=\frac{1}{2} e^{-q(\sigma)}\sin^2(u+\Theta)|D \Gamma^{a}v|^2\nonumber\\ &+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)\big(2\partial_t \Gamma^{a}v\partial_{\mu}\Theta Q^{\mu0}(\Theta,\Gamma^av)-\frac{1}{2}Q_{\mu\nu}(\Theta,\Gamma^{a}v)Q^{\mu\nu}(\Theta,\Gamma^av)\big)\nonumber\\
&=\frac{1}{2} e^{-q(\sigma)}\sin^2(u+\Theta)|D \Gamma^{a}v|^2\nonumber\\
&+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)\big(|\partial_t \Theta|^2|\nabla \Gamma^{a}v|^2-|\nabla \Theta|^2|\partial_t \Gamma^{a}v|^2-\frac{1}{2}Q_{ij}(\Theta,\Gamma^{a}v)Q_{ij}(\Theta,\Gamma^{a}v)\big)\nonumber\\
&=\frac{1}{2} e^{-q(\sigma)}\sin^2(u+\Theta)|D \Gamma^{a}v|^2\nonumber\\
&+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)\big(Q(\Theta,\Theta)|\nabla \Gamma^{a}v|^2+(\nabla \Theta\cdot \nabla \Gamma^{a}v)^2\big), \end{align} we have \begin{align}\label{hj89eeee99} &e_0-\widetilde{e}_0\nonumber\\
&=\frac{1}{2}e^{-q(\sigma)}|D \Gamma^{a}u|^2+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)|\partial_t \Gamma^{a}v|^2\nonumber\\
&+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)\big((1-Q(\Theta,\Theta))|\nabla \Gamma^{a}v|^2-(\nabla \Theta\cdot \nabla \Gamma^{a}v)^2\big)\nonumber\\
&\geq \frac{1}{2}e^{-q(\sigma)}|D \Gamma^{a}u|^2+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)|\partial_t \Gamma^{a}v|^2\nonumber\\
&+\frac{1}{2}e^{-q(\sigma)}\cos^2(u+\Theta)(1-|\partial_t\Theta|^2)|\nabla \Gamma^{a}v|^2. \end{align} By \eqref{hj89eeee99}, \eqref{e111990hjuu}, Remark \ref{rem9999888}
and the smallness of $|u|, |Du|$ and $|Dv|$, we can obtain that there exists a positive constant $c_2=c_2(\widetilde{\lambda}_0,\widetilde{\lambda}_1)$ such that \begin{align}\label{rtt56666666} c_2^{-1}e_0\leq e_0-\widetilde{e}=e_0-\widetilde{e}_0-e_1 \leq c_2e_0. \end{align} \par Now we will estimate all the terms on the right hand side of \eqref{rule5}. Thanks to \eqref{with0} and Lemma \ref{QL}, we have the pointwise estimate
\begin{align}
|\widetilde{p}|&\leq C\big(|u|^2_{\Gamma,2}+|v|^2_{\Gamma,2}+|\Theta|^2_{\Gamma,8}\big)\big(|Du|_{\Gamma,6}+|Dv|_{\Gamma,6}\big)\big(\sum_{|b|\leq 6}|T\Gamma^{b}u|+\sum_{|b|\leq 6}|T\Gamma^{b}v|\big)\nonumber\\
&+C\langle t\rangle^{-1}\big(|u|^2_{\Gamma,2}+|v|^2_{\Gamma,2}+|\Theta|^2_{\Gamma,8}\big)\big(|Du|_{\Gamma,6}+|Dv|_{\Gamma,6}\big)^2. \end{align} Thus we have \begin{align}\label{y78900}
&\|\widetilde{p}(t,\cdot)\|_{L^1(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-1}\big(\mathcal {X}^2_2(u(t))+\mathcal {X}^2_2(v(t))+\mathcal {X}^2_8({\Theta}(t))\big)\big(E_7^{\frac{1}{2}}(u(t))+E_7^{\frac{1}{2}}(v(t))\big)\big(\mathcal {E}_7^{\frac{1}{2}}(u(t))+\mathcal {E}_7^{\frac{1}{2}}(v(t))\big) \nonumber\\ &+ C \langle t\rangle^{-2}\big(\mathcal {X}^2_2(u(t))+\mathcal {X}^2_2(v(t))+\mathcal {X}^2_8({\Theta}(t))\big)\big(E_7(u(t))+E_7(v(t))\big). \end{align} It follows from \eqref{xianxingeeeee}, \eqref{hju7899}, \eqref{67yu969} and Lemma \ref{Linfty} that \begin{align}\label{4890uoop}
\mathcal {X}_8({\Theta}(t))\leq C\big(\|\Theta_0\|_{W^{10,1}(\mathbb{R}^2)}+\|\Theta_1\|_{W^{9,1}(\mathbb{R}^2)}\big)\leq C\widetilde{\lambda}. \end{align}
\par Now we estimate $\|\partial_t \Gamma^{a}uf_a\|_{L^1(\mathbb{R}^2)}$ and $\|\partial_t \Gamma^{a}vg_a\|_{L^1(\mathbb{R}^2)}.$ It is obvious that \begin{align}
&\|\partial_t \Gamma^{a}uf_a\|_{L^1(\mathbb{R}^2)}+\|\partial_t \Gamma^{a}vg_a\|_{L^1(\mathbb{R}^2)}\nonumber\\
&\leq \big({E}_7^{\frac{1}{2}}(u(t))+ {E}_7^{\frac{1}{2}}(v(t))\big)\big(\|f_a\|_{L^2(\mathbb{R}^2)}+\|g_a\|_{L^2(\mathbb{R}^2)}\big) \end{align} and \begin{align}\label{fgtyyyeee}
&\|f_a\|_{L^2(\mathbb{R}^2)}+\|g_a\|_{L^2(\mathbb{R}^2)}\nonumber\\
&\leq \|\Gamma^{a}F_1(u+\Theta, D(u+\Theta), Dv)\|_{L^2(\mathbb{R}^2)}+\| \Gamma^{a}G_1(u+\Theta, D(u+\Theta), Dv)\|_{L^2(\mathbb{R}^2)}\nonumber\\ &+
\|\big[\Gamma^{a}, a_{\mu\nu}(u+\Theta, Dv)\partial_{\mu}\partial_{\nu}\big](u+\Theta)\|_{L^2(\mathbb{R}^2)}+\|\big[\Gamma^a,b_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big]v\|_{L^2(\mathbb{R}^2)}\nonumber\\ &+
\|\big[\Gamma^{a}, c_{\mu\nu}(u+\Theta,D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big](u+\Theta)\|_{L^2(\mathbb{R}^2)}+\|[\Box, \Gamma^{a}]u\|_{L^2(\mathbb{R}^2)}\nonumber\\
&+\|\big[\Gamma^a,d_{\mu\nu}(u+\Theta, D(u+\Theta), Dv)\partial_{\mu}\partial_{\nu}\big]v\|_{L^2(\mathbb{R}^2)}+\|[\Box, \Gamma^{a}]v\|_{L^2(\mathbb{R}^2)}. \end{align} We will only focus on the estimates of the first and second parts on the right hand side of \eqref{fgtyyyeee}; the remaining parts can be treated similarly. In view of \eqref{PF11111} and \eqref{PF1dddd1111}, we have \begin{align}\label{fgtyyyeee88999}
&\|\Gamma^{a}F_1(u+\Theta, D(u+\Theta), Dv)\|_{L^2(\mathbb{R}^2)}+\| \Gamma^{a}G_1(u+\Theta, D(u+\Theta), Dv)\|_{L^2(\mathbb{R}^2)}\nonumber\\
&\leq \|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma,6,2}+\|\sin(2(u+\Theta))Q(u+\Theta,v)\|_{\Gamma,6,2}\nonumber\\
&+\|\sin(2(u+\Theta))Q_{\mu\nu}(u+\Theta,v)Q^{\mu\nu}(u+\Theta,v)\|_{\Gamma,6,2}. \end{align} It follows from Lemma \ref{composite} and Lemma \ref{QL} that \begin{align}\label{56777}
&\|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma,6,2}\nonumber\\
&\leq C\sum_{|b|+|\beta|\leq 6}\|\Gamma^{b}\sin(2(u+\Theta))\Gamma^{\beta}Q(v,v)\|_{L^2(\mathbb{R}^2)}\nonumber\\
&\leq C\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}u D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}+ C\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}\Theta D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}. \end{align}
For $|b|+|c|+|d|\leq 6$, if $|b|+|c|\leq 3$, we have \begin{align}\label{fgrtt}
&\|\Gamma^{b}u D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-1} \|\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{b}u\|_{L^{\infty}(\mathbb{R}^2)} \|\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}} D\Gamma^{c}v\|_{L^{\infty}(\mathbb{R}^2)} \|\langle t-r\rangle^{-1} T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-1} \mathcal {X}_{4}(u(t))\mathcal {X}_{4}(v(t))\mathcal {E}^{\frac{1}{2}}_{7}(v(t)). \end{align}
If $|b|+|d|\leq 3$, by \eqref{gooddecay} we get \begin{align}\label{xiaopjk}
&\|\Gamma^{b}u D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-2} \|\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{b}u\|_{L^{\infty}(\mathbb{R}^2)} \|D\Gamma^{c}v\|_{L^{2}(\mathbb{R}^2)} \|\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{d+1}v\|_{L^{\infty}(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-2} \mathcal {X}_{4}(u(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t)). \end{align}
If $|c|+|d|\leq 3$, by Hardy inequality \eqref{Hardy} and \eqref{gooddecay} we have \begin{align}
&\|\Gamma^{b}u D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-2} \|\langle t-r\rangle^{-1}\Gamma^{b}u\|_{L^{2}(\mathbb{R}^2)} \|\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}D\Gamma^{c}v\|_{L^{{\infty}}(\mathbb{R}^2)} \|\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{d+1}v\|_{L^{\infty}(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-2} \mathcal {X}_{4}(v(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(u(t)). \end{align} Thus we obtain \begin{align}\label{shj799}
&\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}u D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-1} \big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))\big)\big(\mathcal {E}^{\frac{1}{2}}_{7}(u(t)) +\mathcal{E}^{\frac{1}{2}}_{7}(v(t))\big)\nonumber\\ &+ C\langle t\rangle^{-2}\big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))\big)\big( {E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\big). \end{align}
For the second term on the right hand side of \eqref{56777}, if $|b|+|c|\leq 3$, similarly to \eqref{fgrtt}, we have \begin{align}\label{fgrtt111}
\|\Gamma^{b}\Theta D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\leq C\langle t\rangle^{-1} \mathcal {X}_{4}(\Theta(t))\mathcal {X}_{4}(v(t))\mathcal {E}^{\frac{1}{2}}_{7}(v(t)). \end{align}
If $|b|+|d|\leq 3$ or $|c|+|d|\leq 3$, similarly to \eqref{xiaopjk}, we have \begin{align}\label{xiaopjk1111}
\|\Gamma^{b}\Theta D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\leq C\langle t\rangle^{-2} \mathcal {X}_{8}(\Theta(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t)). \end{align} Thus we have \begin{align}\label{frt566tttt}
&\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}\Theta D\Gamma^{c}v T\Gamma^{d}v\|_{L^2(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-1} \big(\mathcal {X}^2_{8}(\Theta(t))+\mathcal {X}^2_{4}(v(t))\big)\mathcal{E}^{\frac{1}{2}}_{7}(v(t)) + C\langle t\rangle^{-2}\big(\mathcal {X}^2_{8}(\Theta(t))+\mathcal {X}^2_{4}(v(t))\big){E}^{\frac{1}{2}}_{7}(v(t)). \end{align} By \eqref{56777}, \eqref{shj799} and \eqref{frt566tttt}, we have \begin{align}\label{56777yyyy}
&\|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma,6,2}\nonumber\\ &\leq C\langle t\rangle^{-1} \big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))+\mathcal {X}^2_{8}(\Theta(t))\big)\big(\mathcal {E}^{\frac{1}{2}}_{7}(u(t)) +\mathcal{E}^{\frac{1}{2}}_{7}(v(t))\big)\nonumber\\ &+ C\langle t\rangle^{-2}\big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))+\mathcal {X}^2_{8}(\Theta(t))\big)\big( {E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\big). \end{align}
\par Similarly to \eqref{56777yyyy}, the second and third parts on the right hand side of \eqref{fgtyyyeee88999} can be estimated in the same way and admit the same upper bound.\par From the above discussion, we can get \begin{align}\label{ccfr5666699}
&\|\partial_t \Gamma^{a}uf_a\|_{L^1(\mathbb{R}^2)}+\|\partial_t \Gamma^{a}vg_a\|_{L^1(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-1} \big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))+\mathcal {X}^2_{8}(\Phi(t))\big)\big(\mathcal {E}^{\frac{1}{2}}_{7}(u(t)) +\mathcal{E}^{\frac{1}{2}}_{7}(v(t))\big)\big( {E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\big)\nonumber\\ &+ C\langle t\rangle^{-2}\big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))+\mathcal {X}^2_{8}(\Phi(t))\big)\big( {E}_{7}(u(t)) +{E}_{7}(v(t))\big). \end{align} \par Combining \eqref{rule5}, \eqref{rtt56666666}, \eqref{y78900}, \eqref{4890uoop} and \eqref{ccfr5666699}, we can get \begin{align} &{E}_{7}(u(t)) +{E}_{7}(v(t))+\int_{0}^{t} \mathcal{E}_{7}(u(t)) +\mathcal{E}_{7}(v(t)) dt\nonumber\\ &\leq C\varepsilon^2+C\int_0^{t}\langle t\rangle^{-1} \big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))\big)\big(\mathcal {E}^{\frac{1}{2}}_{7}(u(t)) +\mathcal{E}^{\frac{1}{2}}_{7}(v(t))\big)\big( {E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\big)dt\nonumber\\ &+ C\int_0^{t}\langle t\rangle^{-2}\big(\mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))\big)\big( {E}_{7}(u(t)) +{E}_{7}(v(t))\big)dt\nonumber\\ &+ C\int_0^{t}\langle t\rangle^{-1} \mathcal {X}^2_{8}(\Phi(t))\big(\mathcal {E}^{\frac{1}{2}}_{7}(u(t)) +\mathcal{E}^{\frac{1}{2}}_{7}(v(t))\big)\big( {E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\big)dt\nonumber\\ &+ C\int_0^{t}\langle t\rangle^{-2}\mathcal {X}^2_{8}(\Phi(t))\big( {E}_{7}(u(t)) +{E}_{7}(v(t))\big)dt \nonumber\\ &\leq C\varepsilon^2+16CA_1^2A_2^2\varepsilon^4+ C\int_0^{t}\langle t\rangle^{-2}\big( {E}_{7}(u(t)) +{E}_{7}(v(t))\big)dt\nonumber\\ &+\frac{1}{100}\int_{0}^{t} \mathcal{E}_{7}(u(t)) +\mathcal{E}_{7}(v(t)) dt. \end{align} Then we have \begin{align} {E}_{7}(u(t)) +{E}_{7}(v(t))\leq C\varepsilon^2+16CA_1^2A_2^2\varepsilon^4+ C\int_0^{t}\langle t\rangle^{-2}\big( {E}_{7}(u(t)) +{E}_{7}(v(t))\big)dt. \end{align} By Gronwall's inequality, we get \begin{align}\label{gji899ener} {E}^{\frac{1}{2}}_{7}(u(t)) +{E}^{\frac{1}{2}}_{7}(v(t))\leq C_0\varepsilon+4C_0A_1A_2\varepsilon^2. \end{align} \subsection{$L^{\infty}$ estimates} By Lemma \ref{Linfty}, we have \begin{align} &\mathcal {X}_{4}(u(t))+\mathcal {X}_{4}(v(t))\nonumber\\
&\leq C\varepsilon+C\int_{0}^{t}\|F(u+\Theta, D(u+\Theta), Dv, D^2(u+\Theta), D^2v)\|_{\Gamma, 5,1}dt\nonumber\\
&~~~~~~~~+C\int_{0}^{t}\|G(u+\Theta, D(u+\Theta), Dv, D^2(u+\Theta), D^2v)\|_{\Gamma, 5,1}dt. \end{align} In view of \eqref{333333000}--\eqref{PF1dddd1111}, we have \begin{align}\label{righttyy}
&\|F(u+\Theta, D(u+\Theta), Dv, D^2(u+\Theta), D^2v)\|_{\Gamma, 5,1}\nonumber\\
&~~~~~~~~~~+\|G(u+\Theta, D(u+\Theta), Dv, D^2(u+\Theta), D^2v)\|_{\Gamma, 5,1}\nonumber\\
&\leq \|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma, 5,1}+ \|\sin(2(u+\Theta))Q(u+\Theta,v)\|_{\Gamma, 5,1}\nonumber\\
&+\|\sin^2(u+\Theta) \Box v\|_{\Gamma, 5,1}+\|\sin(2(u+\Theta))Q_{\mu\nu}(u+\Theta,v)Q^{\mu\nu}(u+\Theta,v)\|_{\Gamma, 5,1}\nonumber\\
&+\|\cos^2(u+\Theta) Q_{\mu\nu}
\big(v,Q^{\mu\nu}(u+\Theta,v)\big)\|_{\Gamma, 5,1}\nonumber\\
&+\|\cos^2(u+\Theta) Q_{\mu\nu}
\big(u+\Theta,Q^{\mu\nu}(u+\Theta,v)\big)\|_{\Gamma, 5,1}. \end{align} We will focus on the first three terms on the right hand side of \eqref{righttyy}; the remaining terms can be treated similarly. \par For the first term on the right hand side of \eqref{righttyy}, it follows from Lemma \ref{composite} and Lemma \ref{QL} that \begin{align}\label{5677yyy7}
&\|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma,5,1}\nonumber\\
&\leq C\sum_{|b|+|\beta|\leq 5}\|\Gamma^{b}\sin(2(u+\Theta))\Gamma^{\beta}Q(v,v)\|_{L^1(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-1}\sum_{|b|+|c|+|d|\leq 5}\|\Gamma^{b}u D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)}+ C\langle t\rangle^{-1}\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}\Theta D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)}. \end{align}
For $|b|+|c|+|d|\leq 5$, if $|b|+|d|\leq 3$, we have \begin{align}\label{xiaoyyypjk}
&\|\Gamma^{b}u D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-1} \|D\Gamma^{c}v\|_{L^{2}(\mathbb{R}^2)} \|\langle t-r\rangle^{-\frac{1}{2}}\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{b}u\|_{L^{4}(\mathbb{R}^2)}\nonumber\\
&~~~\cdot\|\langle t-r\rangle^{-\frac{1}{2}}\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{d+1}v\|_{L^{4}(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-1} \|\langle t-r\rangle^{-\frac{1}{2}}\|^2_{L^{4}(|x|\leq t+1)}\mathcal {X}_{4}(u(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t))\nonumber\\ &\leq C\langle t\rangle^{-\frac{1}{2}} \mathcal {X}_{4}(u(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t)). \end{align}
If $|b|+|c|\leq 3$, by the Hardy inequality \eqref{Hardy} and \eqref{gooddecay345}, we have \begin{align}\label{fgrtyyyjjt}
&\|\Gamma^{b}u D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-1} \|\langle t-r\rangle^{-1}\Gamma^{d+1}v\|_{L^2(\mathbb{R}^2)}\|\langle t-r\rangle^{-\frac{1}{2}}\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{b}u\|_{L^{4}(\mathbb{R}^2)}\nonumber\\
&~~~\cdot\|\langle t-r\rangle^{-\frac{1}{2}}\langle t+r\rangle^{\frac{1}{2}}\langle t-r\rangle^{\frac{1}{2}}\Gamma^{c+1}v\|_{L^{4}(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-1} \|\langle t-r\rangle^{-\frac{1}{2}}\|^2_{L^{4}(|x|\leq t+1)}\mathcal {X}_{4}(u(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t))\nonumber\\ &\leq C\langle t\rangle^{-\frac{1}{2}} \mathcal {X}_{4}(u(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t)). \end{align}
Similarly to \eqref{fgrtyyyjjt}, if $|c|+|d|\leq 3$, it holds that \begin{align}\label{xyaohi89}
\|\Gamma^{b}u D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)}\leq C\langle t\rangle^{-\frac{1}{2}} \mathcal {X}_{4}(v(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(u(t)). \end{align} Thus we obtain \begin{align}\label{ui8900}
&\sum_{|b|+|c|+|d|\leq 5}\|\Gamma^{b}u D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)}\nonumber\\ &\leq C\langle t\rangle^{-\frac{1}{2}}\big( \mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))\big) \big({E}^{\frac{1}{2}}_{7}(u(t))+{E}^{\frac{1}{2}}_{7}(v(t))\big). \end{align}
For the second part on the right hand side of \eqref{5677yyy7}, for $|b|+|c|+|d|\leq 5$, if $|b|+|d|\leq 3$, similarly to \eqref{xiaoyyypjk}, we get \begin{align}\label{xiaoyyypjrttttk}
\|\Gamma^{b}\Theta D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)} \leq C\langle t\rangle^{-\frac{1}{2}} \mathcal {X}_{4}(\Phi(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t)). \end{align}
If $|b|+|c|\leq 3$ or $|c|+|d|\leq 3$, similarly to \eqref{fgrtyyyjjt}, we have \begin{align}\label{fgrtyyyoojjt}
\|\Gamma^{b}\Theta D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)} \leq C\langle t\rangle^{-\frac{1}{2}} \mathcal {X}_{8}(\Phi(t))\mathcal {X}_{4}(v(t)) {E}^{\frac{1}{2}}_{7}(v(t)). \end{align} Thus we obtain \begin{align}\label{xyu7999}
\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{b}\Theta D\Gamma^{c}v \Gamma^{d+1}v\|_{L^1(\mathbb{R}^2)}\leq C\langle t\rangle^{-\frac{1}{2}} \big(\mathcal {X}^2_{8}(\Phi(t))+\mathcal {X}^2_{4}(v(t))\big) {E}^{\frac{1}{2}}_{7}(v(t)). \end{align} It follows from \eqref{5677yyy7}, \eqref{ui8900} and \eqref{xyu7999} that \begin{align}\label{hj7899900}
&\|\sin(2(u+\Theta))Q(v,v)\|_{\Gamma,5,1}\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}\big( \mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))+\mathcal {X}^2_{8}(\Phi(t))\big) \big({E}^{\frac{1}{2}}_{7}(u(t))+{E}^{\frac{1}{2}}_{7}(v(t))\big). \end{align} \par Similarly to \eqref{hj7899900}, the second term on the right hand side of \eqref{righttyy} can be estimated in the same way and admits the same upper bound.\par For the third term on the right hand side of \eqref{righttyy}, by Lemma \ref{composite} and Lemma \ref{uu679yui}, we get \begin{align}
&\|\sin^2(u+\Theta) \Box v\|_{\Gamma, 5,1}\nonumber\\
&\leq C\sum_{|b|+|\beta|\leq 5}\|\Gamma^{\beta}\sin^2(2(u+\Theta))\Box\Gamma^{b}v\|_{L^1(\mathbb{R}^2)}\nonumber\\
&\leq C\langle t\rangle^{-1}\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{c}u\Gamma^{d}uD\Gamma^{b}v\|_{L^1(\mathbb{R}^2)}+C\langle t\rangle^{-1}\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{c}u\Gamma^{d}\Theta D\Gamma^{b}v\|_{L^1(\mathbb{R}^2)}\nonumber\\
&+C\langle t\rangle^{-1}\sum_{|b|+|c|+|d|\leq 6}\|\Gamma^{c}\Theta\Gamma^{d}\Theta D\Gamma^{b}v\|_{L^1(\mathbb{R}^2)}. \end{align} Then similarly to \eqref{hj7899900}, we have \begin{align}\label{hj789fff4r449900}
&\|\sin^2(u+\Theta) \Box v\|_{\Gamma, 5,1}\nonumber\\ &\leq C\langle t\rangle^{-\frac{3}{2}}\big( \mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))+\mathcal {X}^2_{8}(\Phi(t))\big) \big({E}^{\frac{1}{2}}_{7}(u(t))+{E}^{\frac{1}{2}}_{7}(v(t))\big). \end{align} \par From the above discussion, we obtain \begin{align}\label{dfttyyy56} &\mathcal {X}_{4}(u(t))+\mathcal {X}_{4}(v(t))\nonumber\\ &\leq C\varepsilon+C\int_{0}^{t}\langle t\rangle^{-\frac{3}{2}}\big( \mathcal {X}^2_{4}(u(t))+\mathcal {X}^2_{4}(v(t))+\mathcal {X}^2_{8}(\Phi(t))\big) \big({E}^{\frac{1}{2}}_{7}(u(t))+{E}^{\frac{1}{2}}_{7}(v(t))\big) dt\nonumber\\ &\leq C_1\varepsilon+2C_1A_1\varepsilon+8C_1A_1A_2^2\varepsilon^3. \end{align} \subsection{Conclusion of the proof} Noting \eqref{gji899ener} and \eqref{dfttyyy56}, we get \begin{align} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq C_0\varepsilon+4C_0A_1A_2\varepsilon^2 \end{align} and \begin{align} \sup_{0\leq t\leq T}\big(\mathcal {X}_{4}(u(t))+\mathcal {X}_{4}(v(t))\big) \leq C_1\varepsilon+2C_1A_1\varepsilon+8C_1A_1A_2^2\varepsilon^3. \end{align} Assume that \begin{align} E_{7}^{\frac{1}{2}}(u(0))+E_{7}^{\frac{1}{2}}(v(0))\leq \widetilde{C}_0\varepsilon~~\text{and}~~ \mathcal {X}_{4}(u(0))+\mathcal {X}_{4}(v(0))\leq \widetilde{C}_1\varepsilon. \end{align} Take $A_1=\max\{4C_0,4 \widetilde{C}_0\}$, $A_2=\max\{8(C_1+2C_1A_1), 4 \widetilde{C}_1\}$ and $\varepsilon_0$ sufficiently small such that \begin{align} 16C_0A_2\varepsilon_0+32C_1A_1A_2\varepsilon_0^2\leq 1. \end{align} Then for any $0<\varepsilon\leq \varepsilon_0$, we have \begin{align} \sup_{0\leq t\leq T}\big(E_{7}^{\frac{1}{2}}(u(t))+E_{7}^{\frac{1}{2}}(v(t))\big)\leq A_1\varepsilon~~\text{and}~~\sup_{0\leq t\leq T}\big(\mathcal {X}_{4}(u(t))+\mathcal {X}_{4}(v(t))\big)\leq A_2\varepsilon, \end{align} which completes the proof of Theorem \ref{mainthm2}.
\end{document}
June 2013, 2(2): 423-440. doi: 10.3934/eect.2013.2.423
Asymptotic behavior of the solution to the Cauchy problem for the Timoshenko system in thermoelasticity of type III
Belkacem Said-Houari 1 and Radouane Rahali 2
Division of Mathematical and Computer Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal 23955-6900, Saudi Arabia
Mathematics Department, Annaba University, PO Box 12, Annaba, 23000, Algeria
Received September 2012; Revised December 2012; Published March 2013
In this paper, we investigate the decay property of a Timoshenko system in thermoelasticity of type III in the whole space, where the heat conduction is given by the Green and Naghdi theory. Surprisingly, we show that the coupling of the Timoshenko system with the heat conduction of Green and Naghdi's theory slows down the decay of the solution. In fact, we show that the $L^2$-norm of the solution decays like $(1+t)^{-1/8}$, while in the case of the coupling of the Timoshenko system with the Fourier or Cattaneo heat conduction, the decay rate is of the form $(1+t)^{-1/4}$ [25]. We point out that the decay rate of $(1+t)^{-1/8}$ has been obtained provided that the initial data are in $L^1(\mathbb{R})\cap H^s(\mathbb{R})$ $(s\geq 2)$. If the wave speeds of the first two equations are different, then the decay rate of the solution is of regularity-loss type; that is, in this case the previous decay rate can be obtained only under an additional regularity assumption on the initial data. In addition, by restricting the initial data to be in $H^{s}\left( \mathbb{R}\right)\cap L^{1,\gamma }\left( \mathbb{R}\right)$ with $\gamma \in \left[ 0,1\right]$, we can derive faster decay estimates, improving the decay rate by a factor of $t^{-\gamma/4}$.
Keywords: Timoshenko, heat conduction, decay rate, thermoelasticity.
Mathematics Subject Classification: 35B35, 35L55, 74D05, 93D15, 93D2.
Citation: Belkacem Said-Houari, Radouane Rahali. Asymptotic behavior of the solution to the Cauchy problem for the Timoshenko system in thermoelasticity of type III. Evolution Equations & Control Theory, 2013, 2 (2) : 423-440. doi: 10.3934/eect.2013.2.423
F. Ammar-Khodja, A. Benabdallah, J. E. Muñoz Rivera and R. Racke, Energy decay for Timoshenko systems of memory type, J. Differential Equations, 194 (2003), 82. doi: 10.1016/S0022-0396(03)00185-2.
C. Cattaneo, Sulla conduzione del calore, Atti Sem. Mat. Fis. Univ. Modena, 3 (1949), 83.
D. S. Chandrasekharaiah, Hyperbolic thermoelasticity: A review of recent literature, Appl. Mech. Rev., 51 (1998), 705. doi: 10.1115/1.3098984.
M. Dreher, R. Quintanilla and R. Racke, Ill-posed problems in thermomechanics, Appl. Math. Lett., 22 (2009), 1374. doi: 10.1016/j.aml.2009.03.010.
A. E. Green and P. M. Naghdi, A re-examination of the basic postulates of thermomechanics, Proc. Royal Society London A, 432 (1991), 171. doi: 10.1098/rspa.1991.0012.
A. E. Green and P. M. Naghdi, On undamped heat waves in an elastic solid, J. Thermal Stresses, 15 (1992), 253. doi: 10.1080/01495739208946136.
K. Ide, K. Haramoto and S. Kawashima, Decay property of regularity-loss type for dissipative Timoshenko system, Math. Mod. Meth. Appl. Sci., 18 (2008), 647. doi: 10.1142/S0218202508002802.
K. Ide and S. Kawashima, Decay property of regularity-loss type and nonlinear effects for dissipative Timoshenko system, Math. Mod. Meth. Appl. Sci., 18 (2008), 1001. doi: 10.1142/S0218202508002930.
R. Ikehata, Diffusion phenomenon for linear dissipative wave equations in an exterior domain, J. Differential Equations, 186 (2002), 633. doi: 10.1016/S0022-0396(02)00008-6.
R. Ikehata, New decay estimates for linear damped wave equations and its application to nonlinear problem, Math. Meth. Appl. Sci., 27 (2004), 865. doi: 10.1002/mma.476.
P. M. Jordan, W. Dai and R. E. Mickens, A note on the delayed heat equation: Instability with respect to initial data, Mechanics Research Communications, 35 (2008), 414. doi: 10.1016/j.mechrescom.2008.04.001.
D. D. Joseph and L. Preziosi, Heat waves, Rev. Mod. Physics, 61 (1989), 41. doi: 10.1103/RevModPhys.61.41.
S. A. Messaoudi, M. Pokojovy and B. Said-Houari, Nonlinear damped Timoshenko systems with second sound - global existence and exponential stability, Math. Meth. Appl. Sci., 32 (2009), 505. doi: 10.1002/mma.1049.
S. A. Messaoudi and B. Said-Houari, Exponential stability in one-dimensional non-linear thermoelasticity with second sound, Math. Methods Appl. Sci., 28 (2005), 205. doi: 10.1002/mma.556.
S. A. Messaoudi and B. Said-Houari, Energy decay in a Timoshenko-type system of thermoelasticity of type III, J. Math. Anal. Appl., 348 (2008), 298. doi: 10.1016/j.jmaa.2008.07.036.
S. A. Messaoudi and B. Said-Houari, Energy decay in a Timoshenko-type system with history in thermoelasticity of type III, Advances in Differential Equations, 14 (2009), 375.
R. Quintanilla and R. Racke, Stability in thermoelasticity of type III, Discrete and Continuous Dynamical Systems B, 3 (2003), 383. doi: 10.3934/dcdsb.2003.3.383.
R. Quintanilla and R. Racke, Qualitative aspects in dual-phase-lag thermoelasticity, SIAM J. Appl. Math., 66 (2006), 977. doi: 10.1137/05062860X.
R. Racke, "Lectures on Nonlinear Evolution Equations. Initial Value Problems," Aspects of Mathematics, E19 (1992).
R. Racke, Thermoelasticity with second sound-exponential stability in linear and non-linear 1-d, Math. Methods. Appl. Sci., 25 (2002), 409. doi: 10.1002/mma.298.
R. Racke and Y. Wang, Nonlinear well-posedness and rates of decay in thermoelasticity with second sound, J. Hyperbolic Differ. Equ., 5 (2008), 25. doi: 10.1142/S021989160800143X.
M. Reissig and G. Y. Wang, Cauchy problems for linear thermoelastic systems of type III in one space variable, Math. Methods Appl. Sci., 28 (2005), 1359. doi: 10.1002/mma.619.
J. E. Muñoz Rivera and R. Racke, Mildly dissipative nonlinear Timoshenko systems-global existence and exponential stability, J. Math. Anal. Appl., 276 (2002), 248. doi: 10.1016/S0022-247X(02)00436-5.
J. E. Muñoz Rivera and H. D. Fernández Sare, Stability of Timoshenko systems with past history, J. Math. Anal. Appl., 339 (2008), 482. doi: 10.1016/j.jmaa.2007.07.012.
B. Said-Houari and A. Kasimov, Decay property of Timoshenko system in thermoelasticity, Math. Methods. Appl. Sci., 35 (2012), 314. doi: 10.1002/mma.1569.
H. D. Fernández Sare and R. Racke, On the stability of damped Timoshenko systems: Cattaneo versus Fourier law, Arch. Rational Mech. Anal., 194 (2009), 221. doi: 10.1007/s00205-009-0220-2.
M. A. Tarabek, On the existence of smooth solutions in one-dimensional nonlinear thermoelasticity with second sound, Quart. Appl. Math., 50 (1992), 727.
D. Y. Tzou, Thermal shock phenomena under high-rate response in solids, Annual Review of Heat Transfer, 4 (1992), 111.
D. Y. Tzou, A unified field approach for heat conduction from macro to micro-scales, J. Heat Transfer, 117 (1995), 8. doi: 10.1115/1.2822329.
Y.-G. Wang and L. Yang, $L^p$-$L^q$ decay estimates for Cauchy problems of linear thermoelastic systems with second sound in three dimensions, Proc. Roy. Soc. Edinburgh Sect. A, 136 (2006), 189. doi: 10.1017/S0308210500004510.
X. Zhang and E. Zuazua, Decay of solutions of the system of thermoelasticity of type III, Commun. Contemp. Math., 5 (2003), 25. doi: 10.1142/S0219199703000896.
\begin{document}
\sloppy
\def\Mat#1#2#3#4{\left(\!\!\!\begin{array}{cc}{#1}&{#2}\\{#3}&{#4}\\ \end{array}\!\!\!\right)}
\def{\Bbb A}{{\Bbb A}} \def{\Bbb C}{{\Bbb C}} \def{\Bbb H}{{\Bbb H}} \def{\Bbb N}{{\Bbb N}} \def{\Bbb Q}{{\Bbb Q}} \def{\Bbb R}{{\Bbb R}} \def{\Bbb Z}{{\Bbb Z}} \def{\Bbb F}{{\Bbb F}} \def{\Bbb S}{{\Bbb S}} \def{\Bbb G}{{\Bbb G}} \def{\Bbb P}{{\Bbb P}} \def{\Bbb L}{{\Bbb L}}
\title{On mod $p^c$ transfer and applications}
\author{J. Mahnkopf, U. of Vienna}
\maketitle
{\small We study a mod $p^c$ analog of the notion of transfer for automorphic forms. Instead of existence of eigenforms, such transfers yield congruences between eigenforms but, like transfers, we show that they can be established by a comparison of trace formulas. This rests on the properties of mod $p^c$ reduced multiplicities which count congruences between eigenforms. As an application we construct finite slope $p$-adic {\it continuous} families of Siegel eigenforms using a comparison of trace formulas. }
{\bf (0.1) } In this article we were motivated by the idea of the {comparison} of trace formulas as a universal principle to relate the existence of different types of automorphic representations. If such an idea holds true then the comparison of trace formulas should be applicable to a wider class of statements relating different types of automorphic representations. Probably the most important statement of this kind is the functoriality principle. This principle yields a transfer from automorphic representations on a reductive group $G$ to the set of automorphic representations on another reductive group $G'$. Thus, it relates the existence of automorphic representations on different groups and special cases of functoriality have been proven via a comparison of trace formulas.
Another example of such a statement which also is of a general nature is the theory of $p$-adic families of automorphic forms. Given an eigenform $f_0$, it predicts the existence of a $p$-adic analytic family of eigenforms passing through $f_0$, i.e. it predicts the existence of infinitely many eigenforms which are related to each other and not only to $f_0$ and which are determined by $f_0$ only modulo a power of $p$ (but which are all on the same group $G$). Thus, like the functoriality principle, the theory of $p$-adic families relates the existence of automorphic forms (in varying weights) but it is of a different nature. An approach based on a comparison of trace formulas therefore is not obvious but would confirm the idea of the comparison of trace formulas as a universal principle.
{\bf (0.2) } In this article we describe a comparison of trace formulas which solves the problem of existence of {\it continuous} families of finite slope. Since a continuous family is defined by a system of congruences between its eigenforms we need a comparison of trace formulas which instead of existence of eigenforms yields congruences between eigenforms. We are led to such a kind of comparison by considering a mod $p^c$ analog of the notion of transfer for automorphic forms. Unlike transfers, a mod $p^c$ transfer only yields a mod $p^c$ approximation to the predicted eigenform, thus, it yields a {\it congruence} which is satisfied by an eigenform.
Like transfers, mod $p^c$ transfers can be established by a comparison of trace formulas, i.e. the existence of a mod $p^c$ transfer follows from certain congruences between traces of Hecke operators.
This is based on the properties of mod $p^c$ reduced multiplicities. Unlike multiplicities, reduced multiplicities count congruences between eigenforms, hence, their relation to mod $p^c$ transfers. On the other hand, like multiplicities, they can be computed as traces of certain Hecke operators, hence, their relation to the trace formula; cf. (0.3) for more details.
In part B we verify the necessary congruences for the group ${\rm GSp}_{2n}$ by comparing the geometric sides of two simple topological trace formulas. This essentially comes down to a problem about eigenvalues of certain symplectic matrices (cf. section 7, in particular (7.1) Lemma).
We only construct continuous families of eigenforms but our method is very different from the ones used in the construction of analytic families. In particular, we do not make use of overconvergent cohomology, $p$-adic Fredholm theory or rigid analytic geometry and the proof is of an elementary nature. We hope that the present method also might apply to yield analyticity of families.
Since, here, we compare two trace formulas on the same group, we avoid the deep problems which have to be solved if the trace formula is applied to the functoriality principle.
{\bf (0.3) Mod $p^c$-transfer. } We explain the mod $p^c$ transfer on which our construction of $p$-adic families is based in more detail. Let ${\cal H}$, ${\cal H}'$ be free commutative ${\Bbb Z}$-algebras in countably many generators (e.g. Hecke algebras attached to reductive groups $G,G'$) and denote by $\hat{\cal H}$ the set of characters $\Theta:\,{\cal H}\rightarrow\bar{\Bbb Q}_p$. For any ${\cal H}$-module ${\bf H}$ we denote by ${\cal E}({\bf H})\subseteq\hat{\cal H}$ the set of {eigencharacters} occurring in ${\bf H}$, i.e. ${\cal E}({\bf H})$ consists of all characters $\Theta$ such that the corresponding generalized (simultaneous) eigenspace ${\bf H}(\Theta)$ does not vanish. Let $\Phi:\,{\cal H}'\rightarrow{\cal H}$ be an algebra morphism and denote by $$ \Phi^\vee:\,\hat{\cal H}\rightarrow\hat{\cal H}' $$ the dual map; cf. (2.1). Let ${\bf H}$ resp. ${\bf H}'$ be a ${\cal H}$ resp. a ${\cal H}'$-module. The fundamental problem is to examine if $\Phi^\vee$ defines a map on {\it eigencharacters} $\Phi^\vee:\,{\cal E}({\bf H})\rightarrow{\cal E}({\bf H}')$. (If ${\bf H},{\bf H}'$ are Hecke modules of automorphic forms this means to examine whether $\Phi^\vee$ defines a transfer for automorphic representations.) In this article, with the application to the theory of $p$-adic families in mind, we want to examine whether $\Phi^\vee$ defines a map on eigencharacters if we reduce modulo a given power of $p$. More precisely, we want to examine if there is a map $$ \Psi^{[c]}:\,{\cal E}({\bf H})\rightarrow{\cal E}({\bf H}')\leqno(1) $$ satisfying $\Psi^{[c]}(\Theta)\equiv\Phi^\vee(\Theta)\pmod{p^c}$ for all $\Theta\in{\cal E}({\bf H})$. We show that the existence of such a mod $p^c$ transfer $\Psi^{[c]}$ corresponding to $\Phi$ can be established by a comparison of trace formulas if we replace the notion of multiplicity by that of a mod $p^c$ reduced multiplicity. The mod $p^c$ reduced multiplicity of $\Theta\in\hat{\cal H}$ is defined as $$ m_{\bf H}(\Theta,c)=\sum_{\mu\equiv\Theta\pmod{p^c}} {\rm dim}\, {\bf H}(\mu), $$ where $\mu$ runs over all characters of ${\cal H}$ which are congruent to $\Theta$ modulo $p^c$. Thus, $m_{\bf H}(\Theta,c)$ counts the number of eigencharacters of ${\bf H}$ which are congruent to $\Theta$ mod $p^c$. In particular, if $$ m_{\bf H}(\Theta,c)=m_{{\bf H}'}(\Phi^\vee(\Theta),c)\leqno(2) $$ for all $\Theta\in\hat{\cal H}$ then for any eigencharacter $\mu\in{\cal E}({\bf H})$ there is an eigencharacter $\mu'\in{\cal E}({\bf H}')$ such that $\mu'\equiv\Phi^\vee(\mu)\pmod{p^{c}}$, i.e. a mod $p^c$ transfer $\Psi^{[c]}$ corresponding to $\Phi$ exists. On the other hand, it is crucial that the reduced multiplicities in (2) can be computed as traces, i.e. there is an element $e'\in{\cal H}'$ such that $$
{\rm tr}(\Phi(e')|{\bf H})\equiv m_{\bf H}(\Theta,c) \quad\mbox{and}\quad{\rm tr}\,(e'|{\bf H}')\equiv m_{{\bf H}'}(\Phi^\vee(\Theta),c) $$ modulo a ``high'' power of $p$; cf. (2.3) Lemma and (2.4) Remark. Using this we obtain:
{\bf Theorem} (cf. 2.4 Theorem). {\it Let ${\rm dim}\,{\bf H},\,{\rm dim}\,{\bf H}'\le \frac{M}{2}$ and assume that there is an $s$ such that $$
{\rm tr}\,(\Phi(T')|{\bf H})\equiv{\rm tr}\,(T'|{\bf H}')\pmod{p^s}\leqno(3) $$ for all $T'\in{\cal H}'$. Then for any $\Theta\in\hat{\cal H}$ there is $c=c(\Theta)>\frac{s}{M}-(M+2) \log_p M$ such that equation (2) holds.
In particular, a mod $p^{\frac{s}{M}-(M+2)\log_p M}$ transfer corresponding to $\Phi$ exists. }
{\bf (0.4) Application to $p$-adic families. } The theory of $p$-adic continuous families is a special case of mod $p^c$ transfers. To explain this, let ${\bf H}_\lambda$ be a family of ${\cal H}$-modules indexed by their ``weight'' $\lambda\in {\Bbb Z}^n$. If there are ${\sf a},{\sf b}$ such that a congruence $\lambda\equiv\lambda_0\pmod{(p-1)p^m}$ implies that there is a mod $p^{{\sf a}(m+1)+{\sf b}}$-transfer $$ \Psi_\lambda:\,{\cal E}({\bf H}_{\lambda_0})\rightarrow{\cal E}({\bf H}_\lambda) $$ corresponding to the identity map $\Phi={\rm id}$, i.e. $\Psi_\lambda(\Theta)\equiv\Theta\pmod{p^{{\sf a}(m+1)+{\sf b}}}$ then the collection of transfers $(\Psi_\lambda(\Theta_0))_\lambda$ is a $p$-adic continuous family passing through a given initial eigencharacter $\Theta_0\in{\cal E}({\bf H}_{\lambda_0})$ (cf. (1.9) Proposition). Thus, the existence of $p$-adic continuous families follows from a system of congruences of the type in (3) (cf. (3.7) Proposition).
In part B we show that the family of slope subspaces of the cohomology of the Siegel upper half plane satisfies these congruences by comparing trace formulas (cf. (7.5) Theorem). This is the main technical work. As a consequence we obtain
{\bf Corollary } (cf. (7.7) Corollary). {\it Any Siegel modular eigenform $f_0$ of slope $\alpha$ fits in a $p$-adic continuous family of eigenforms of slope $\alpha$.
}
We also obtain local constancy of the dimension of the slope spaces.
{\bf (0.5) } Starting with the work of Hida (cf. [H 1], [H 2]) $p$-adic families of automorphic eigenforms have been constructed by several authors; we mention Hida, Ash-Stevens, Buzzard, Coleman, Emerton, Tilouine, Urban, Harder ... Moreover, there is the work of Koike [K] who applies an explicit Selberg trace formula to prove mod $p^m$ congruences between the traces of Hecke operators on the space of elliptic modular forms for varying weights $k$. Since he only considers the Hecke operators $T(p^m)$ at $p$, he can not make statements concerning (existence of) eigenforms, hence, there is no kind of transfer or relation between eigenforms in different weights and, consequently, he does not have to set up a comparison of trace formulas as we described in part A (and which involves comparing traces of all Hecke operators).
We also mention work of Urban who $p$-adically interpolates the traces of Hecke operators for varying weight. Here, Franke's trace formula is applied in the construction but as far as we understand in a technical way to reduce from the whole spectrum to the cuspidal spectrum; essentially, his construction is based on the work of [A-S], in particular, on their notion of overconvergent cohomology. Thus, like Koike he does not set up a comparison of trace formulas (note that Franke's trace formula only has a spectral side and no geometric side, hence, it cannot function in a comparison of trace formulas) and his work seems to be very different from ours.
Part of this article has been described in [Ma 2,3].
\centerline{\bf \Large A. Reduced Multiplicities}
\section{Mod $p^c$ reduced Multiplicities}
We define mod $p^c$ reduced multiplicities and we describe their connection to congruences between eigenforms.
{\bf (1.1) } We fix a prime $p\in{\Bbb N}$. We denote by $v_p$ the $p$-adic valuation on $\bar{\Bbb Q}_p$ normalized by $v_p(p)=1$. We write ${\cal O}$ for the ring of integers in $\bar{\Bbb Q}_p$ and we say that $x\equiv y\pmod{p^t}$, $t\in{\Bbb R}$, if $v_p(x-y)\ge t$ ($x,y\in\bar{\Bbb Q}_p$). Finally, $\lceil x\rceil$ denotes the smallest integer larger than or equal to $x$ and $\log_p$ is the complex logarithm with base $p$.
{\bf (1.2) } We let ${\cal H}=\bar{\Bbb Q}_p[T_\ell,\,\ell\in I]$ be the polynomial algebra over $\bar{\Bbb Q}_p$ generated by a countable number of elements $T_\ell$, $\ell\in I$. We set ${\cal H}_{\cal O}={\cal O}[T_\ell,\,\ell\in I]$, hence, ${\cal H}={\cal H}_{\cal O}\otimes\bar{\Bbb Q}_p$ and ${\cal H}_{\cal O}$ is an order in ${\cal H}$. We denote by $\hat{\cal H}_{{\cal O}}$ the set of all $\bar{\Bbb Q}_p$-algebra characters $\Theta:\,{\cal H}\rightarrow \bar{\Bbb Q}_p$
which are defined over ${\cal O}$, i.e. which satisfy $\Theta({\cal H}_{\cal O})\subseteq{\cal O}$.
In the following we will also understand by $\Theta\in\hat{\cal H}_{{\cal O}}$ the induced character $$ \Theta:\,{\cal H}_{\cal O}\rightarrow {\cal O} $$ given by restriction of $\Theta$ to ${\cal H}_{\cal O}$.
Any $\Theta\in\hat{\cal H}_{{\cal O}}$ is determined by the collection of values $\Theta_\ell:=\Theta(T_\ell)$, $\ell\in I$, and we obtain an embedding $$ \hat{\cal H}_{{\cal O}}\hookrightarrow {\cal O}^I. $$
For any character $\Theta$ of ${\cal H}$ we set $$ v_p(\Theta)={\rm inf}_{T\in{\cal H}_{\cal O}} \, v_p(\Theta(T))\in{\Bbb Q}\cup\{-\infty\}. $$ We note that $\Theta\in\hat{\cal H}_{\cal O}$ precisely if $v_p(\Theta)>-\infty$ and in this case we obtain $$ v_p(\Theta)={\rm inf}_{\ell\in I} \, v_p(\Theta_{\ell})\in{\Bbb Q}_{\ge 0}. $$
We say that $\Theta$ is congruent to a character $\mu$ of ${\cal H}$ modulo $p^t$, $t\in{\Bbb R}$, written as $\Theta\equiv \mu\pmod{p^t}$, if $$ v_p(\Theta-\mu)\ge t. $$
{\bf (1.3) } Let ${\bf H}$ be a ${\cal H}$-module which is finite dimensional as $\bar{\Bbb Q}_p$-vector space. We assume that ${\bf H}$ is defined over ${\cal O}$ meaning that ${\bf H}$ contains a ${\cal O}$-submodule ${\bf H}_{{\cal O}}$ which is free and stable under the action of ${\cal H}_{{\cal O}}$ such that ${\bf H}={\bf H}_{\cal O}\otimes\bar{\Bbb Q}_p$.
For any $\Theta\in\hat{\cal H}_{{\cal O}}$ we denote by $$ {\bf H}(\Theta)=\{v\in {\bf H}:\,\mbox{ for all $T\in{\cal H}$ there is $n_T\in{\Bbb N}$ such that}\;(T-\Theta(T))^{n_T}(v)=0\} $$ the generalized simultaneous eigenspace attached to the character $\Theta$ (or, equivalently, to the (system of) eigenvalue(s) $\Theta=(\Theta_{\ell})_{{\ell}\in I}$). We denote by ${\cal E}({\bf H})={\cal E}_{\cal H}({\bf H})$ the set of all eigencharacters occurring in ${\bf H}$, i.e. ${\cal E}({\bf H})$ consists of all characters $\Theta\in\hat{\cal H}_{\cal O}$ such that ${\bf H}(\Theta)\not=0$. Thus, elements in ${\cal E}({\bf H})$ correspond to simultaneous ${\cal H}$-eigenforms in ${\bf H}$. We note that any character $\Theta$ of ${\cal H}$ which occurs in ${\bf H}$ is defined over ${\cal O}$ because ${\bf H}$ is defined over ${\cal O}$, hence, $\Theta\in\hat{\cal H}_{\cal O}$.
Since ${\cal H}$ is commutative we thus obtain a decomposition $$ {\bf H}=\bigoplus_{\Theta\in{\cal E}({\bf H})} {\bf H}(\Theta). $$
Let ${\bf H}$ be a ${\cal H}$-module which is finite dimensional as $\bar{\Bbb Q}_p$-vector space and which is defined over ${\cal O}$ with respect to the free ${\cal O}$-submodule ${\bf H}_{\cal O}$.
{\bf (1.4) Definition. } Let $\Theta\in\hat{\cal H}_{{\cal O}}$ and let $c\in{\Bbb Q}$. We define the $\pmod{p^c}$-reduced multiplicity of $\Theta$ in ${\bf H}$ as $$ {m}_{\bf H}(\Theta,c,p)=\sum_{\mu\in\hat{\cal H}_{{\cal O}}\atop \mu\equiv\Theta\pmod{p^c}} {\rm dim}\,{\bf H}(\mu). $$ If the prime $p$ is understood we will omit "$p$" and write more simply ${m}_{\bf H}(\Theta,c)$ instead.
{\bf (1.5) Remark. }
The reduced multiplicities are related to the multiplicity ${m}_{\bf H}(\Theta):={\rm dim}\,{\bf H}(\Theta)$ by $$ \lim_{c\rightarrow \infty\atop c\in{\Bbb N}} {m}_{\bf H}(\Theta,c)={m}_{\bf H}(\Theta). $$
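{\it Example. } The following toy illustration (the eigenvalues are chosen purely for illustration and do not come from an arithmetic situation) may clarify the definition. Suppose $I=\{\ell_0\}$ consists of a single element, so that a character is determined by the single value $\Theta_{\ell_0}$, and suppose that ${\bf H}$ is the direct sum of three one-dimensional eigenspaces on which $T_{\ell_0}$ acts by the eigenvalues $1$, $1+p$ and $1+p^3$. For the character $\Theta$ with $\Theta_{\ell_0}=1$ we obtain $$ {m}_{\bf H}(\Theta,c)=\left\{\begin{array}{ll} 3,& c\le 1,\\ 2,& 1<c\le 3,\\ 1,& c> 3, \end{array}\right. $$ in accordance with (1.5) Remark: as $c$ grows the reduced multiplicity decreases to the multiplicity ${m}_{\bf H}(\Theta)=1$.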
{\bf (1.6) Mod ${p^c}$ reduced multiplicities and mod $p^c$ reduction. } We set ${\bf H}_{{\cal O}}(\mu)={\bf H}(\mu)\cap {\bf H}_{{\cal O}}$, hence, $$ \bigoplus_{\mu\in{\cal E}({\bf H})}{\bf H}_{\cal O}(\mu)\subseteq{\bf H}_{\cal O}. $$ We let $c\in{\Bbb Q}_{\ge 0}$ and define the ideal ${\mathfrak a}=\{x\in{\cal O}:\,v_p(x)\ge c\}\le{\cal O}$. We denote by $$ \bar{\cal O}=\frac{\cal O}{\mathfrak a},\quad \bar{\cal H}_{\cal O}=\frac{{\cal H}_{\cal O}}{{\mathfrak a}{\cal H}_{\cal O}}\quad\mbox{and}\quad\bar{\bf H}_{{\cal O}} =\frac{{\bf H}_{{\cal O}}}{{\mathfrak a} {\bf H}_{{\cal O}}} $$ the ${\rm mod}\,{\mathfrak a}$-reductions and we denote by $\bar{\bf H}_{{\cal O}}(\mu)$ the image of ${\bf H}_{{\cal O}}(\mu)$ in $\bar{\bf H}_{{\cal O}}$. Hence, $\bar{\bf H}_{{\cal O}}$ and $\bar{\bf H}_{{\cal O}}(\mu)$ are $\bar{\cal H}_{\cal O}$-modules.
We assume that ${\bf H}_{\cal O}(\mu)\le{\bf H}_{\cal O}$ is a free ${\cal O}$-submodule (in our later applications ${\bf H}$ will be defined over a finite extension $E/{\Bbb Q}_p$, hence, we may replace ${\cal O}$ by ${\cal O}_E$ which is a P.I.D and the assumption holds). Since ${\bf H}_{\cal O}(\mu)\le{\bf H}_{\cal O}$ is saturated,
$\bar{\bf H}_{{\cal O}}(\mu)$ is a free $\bar{\cal O}$-module
which is annihilated by $(T-\bar{\mu}(T))^{n_T}$ for all $T\in\bar{\cal H}_{\cal O}$; here, $\bar{\mu}=\mu\,{\rm mod}\,p^c:\, \bar{\cal H}_{\cal O}\rightarrow\bar{\cal O}$, $T+{\mathfrak a}{\cal H}_{\cal O}\mapsto \mu(T)+{\mathfrak a}$ is the ${\rm mod}\,{\mathfrak a}$-reduced character.
If the decomposition of ${\bf H}$ as a sum of generalized eigenspaces is defined over $\cal O$, i.e. if $\bigoplus_{\mu\in{\cal E}({\bf H})} {\bf H}_{{\cal O}}(\mu)={\bf H}_{{\cal O}}$ then $$ \bigoplus_{\mu\in{\cal E}({\bf H})}\bar{\bf H}_{\cal O}(\mu)=\bar{\bf H}_{\cal O}\leqno(1) $$
as $\bar{\cal O}$-modules and, hence, we obtain for the multiplicity of ${\Theta}\,{\rm mod}\,p^c$ in $\bar{\bf H}_{\cal O}$ $$ m_{\bar{\bf H}_{{\cal O}}}(\Theta\,{\rm mod}\,p^c):={\rm dim}_{\bar{\cal O}}\,\sum_{\mu\in\hat{\cal H}_{\cal O}\atop\,\bar{\mu}=\bar{\Theta}}\bar{\bf H}_{\cal O}(\mu)=m_{\bf H}(\Theta,c,p). $$
In general, we only obtain an inclusion of a (not necessarily direct) sum $$ \sum_{\mu\in{\cal E}({\bf H})}\bar{\bf H}_{\cal O}(\mu)\subseteq\bar{\bf H}_{\cal O}, $$
which yields an inequality $$ m_{\bar{\bf H}_{{\cal O}}}(\Theta\,{\rm mod}\,p^c)\le m_{\bf H}(\Theta,c,p). $$ In this sense, the reduced multiplicity ${m}_{\bf H}(\Theta,c,p)$ is a substitute for the multiplicity of the mod $p^c$-reduction of ${\Theta}$ in $\bar{\bf H}_{{\cal O}}$ if the primary decomposition of ${\bf H}$ is not defined over ${\cal O}$.
{\bf (1.7) Higher Congruences }. Let ${\cal H}=\bar{\Bbb Q}_p[T_\ell,\,\ell\in I]$ and ${\cal H}'=\bar{\Bbb Q}_p[T_\ell,\ell\in I']$ and let ${\bf H}$ resp. ${\bf H}'$ be a ${\cal H}$ resp. ${\cal H}'$-module which is defined over ${\cal O}$ with respect to the lattice ${\bf H}_{\cal O}$ resp. ${\bf H}'_{\cal O}$ as in (1.3). Let $$ \Phi^\vee:\,\hat{\cal H}_{\cal O}\rightarrow\hat{\cal H}'_{\cal O} $$ be a map and let $m\in{\Bbb N}$.
If for any $\Theta\in\hat{\cal H}_{\cal O}$ there is a rational number $c=c(\Theta)\ge m$ such that $$ m_{\bf H}(\Theta,c)=m_{{\bf H}'}(\Phi^\vee(\Theta),c) $$ then for any $\Theta\in{\cal E}({\bf H})$ there is an eigencharacter $\Theta'\in{\cal E}({\bf H}')$ such that $$ \Theta'\equiv\Phi^\vee(\Theta)\pmod{p^c}. $$ In other words, there is a map on {\it eigencharacters} $$ \Psi^{[c]}:\,{\cal E}_{\cal H}({\bf H})\rightarrow{\cal E}_{{\cal H}'}({\bf H}') $$ such that $\Psi^{[c]}(\Theta)\equiv \Phi^\vee(\Theta)\pmod{p^c}$. Thus, by comparing reduced multiplicities we can establish congruences between eigencharacters in ${\cal E}({\bf H}')$ and lifts of eigencharacters in ${\cal E}({\bf H})$ or, equivalently, a mod $p^c$ transfer from ${\cal E}_{\cal H}({\bf H})$ to ${\cal E}_{{\cal H}'}({\bf H}')$. In particular, if ${\cal H}={\cal H}'$ and $\Phi^\vee={\rm id}$ then the set of identities $$ m_{\bf H}(\Theta,c)=m_{{\bf H}'}(\Theta,c),\quad\Theta\in\hat{\cal H}_{\cal O}, $$ implies that for any $\Theta\in{\cal E}({\bf H})$ a congruence $$ \Theta\equiv\Theta'\pmod{p^m} $$ holds for some $\Theta'\in{\cal E}({\bf H}')$.
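{\it Remark. } The counting underlying the preceding discussion is elementary and can be made completely explicit. The following sketch (written in Python, operating on rational approximations of the eigenvalues; the input format and the function names are purely illustrative and are not part of the formal development) indicates how the identity $m_{\bf H}(\Theta,c)=m_{{\bf H}'}(\Phi^\vee(\Theta),c)$ is checked once the eigencharacters of ${\bf H}$ and ${\bf H}'$ are known on finitely many generators.
\begin{verbatim}
from fractions import Fraction

def vp(x, p):
    # p-adic valuation of a rational number x; vp(0) is +infinity.
    x = Fraction(x)
    if x == 0:
        return float("inf")
    v, num, den = 0, abs(x.numerator), x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def congruent(theta, mu, c, p):
    # theta, mu: dicts {generator label: eigenvalue}; the congruence
    # mod p^c is checked on the finitely many recorded generators.
    return all(vp(theta[l] - mu[l], p) >= c for l in theta)

def reduced_multiplicity(eigen, theta, c, p):
    # eigen: list of pairs (character dict, dimension of its eigenspace).
    return sum(d for (mu, d) in eigen if congruent(theta, mu, c, p))

# The criterion of (1.7) then reads:
#   reduced_multiplicity(eigen_H, theta, c, p)
#       == reduced_multiplicity(eigen_Hprime, phi_dual(theta), c, p)
\end{verbatim}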
{\bf (1.8) Systems of higher congruences. } Slightly refining the above discussion we can give a set of simple identities between mod $p^c$ reduced multiplicities which implies the existence of $p$-adic {\it continuous} families of eigencharacters (note that such families are defined by a system of congruences between their members). To be more precise, we let ${\bf G}/{\Bbb Q}$ be a connected reductive algebraic group with maximal split torus ${\bf T}$ and we denote by $X({\bf T})$ the group of ${\Bbb Q}$-characters of ${\bf T}$. $X({\bf T})$ is a finitely generated, free abelian group which we write using additive notation.
For any $\lambda\in X({\bf T})$ we denote by $v_p(\lambda)$ the largest integer $m$ such that $\lambda\in p^m X({\bf T})$. Thus, if we identify $X({\bf T})\cong {\Bbb Z}^k$ via the choice of a basis $(\gamma_i)_i$ of $X({\bf T})$ then $v_p(\sum_i z_i \gamma_i)=\inf_{i} v_p(z_i)$.
{\bf (1.9) Proposition. }{\it Let ${\cal R}\subset X({\bf T})$ be a subset and let $({\bf H}_\lambda)$, $\lambda\in {\cal R}$, be a family of finite dimensional ${\cal H}$-modules which are defined over ${\cal O}$. Assume there are ${\sf a},{\sf b}\in{\Bbb Q}$ with the following property: if $\lambda\equiv\lambda'\pmod{(p-1)p^m X({\bf T})}$, then for any $\Theta\in\hat{\cal H}_{\cal O}$ there is $c=c(\Theta)\ge {\sf a}(m+1)+{\sf b}$ with $$ m_{{\bf H}_\lambda}(\Theta,c)=m_{{\bf H}_{\lambda'}}(\Theta,c), $$ i.e. the transfer $\Psi^{[{\sf a}(m+1)+{\sf b}]}:\,{\cal E}({\bf H}_{\lambda})\rightarrow {\cal E}({\bf H}_{\lambda'})$ corresponding to $\Phi^\vee={\rm id}$ exists. Then, any $\Theta\in{\cal E}({\bf H}_{\lambda_0})$ fits in a $p$-adic continuous family of eigencharacters, i.e. there is a family $(\Theta_\lambda)$, $\lambda\in{\cal R}$, such that
\begin{itemize}
\item $\Theta_\lambda\in{\cal E}({\bf H}_\lambda)$
\item $\Theta_{\lambda_0}=\Theta$
\item $\lambda\equiv\lambda'\pmod{(p-1)p^m X({\bf T})}$ implies $\Theta_\lambda\equiv\Theta_{\lambda'}\pmod{p^{{\sf a}(m+1)+{\sf b}}}$.
\end{itemize}
}
{\it Proof. } For any weight $\mu\in X({\bf T})$ we set ${\cal R}_\mu=\{\lambda\in{\cal R}:\,\lambda\equiv\mu\pmod{(p-1) X({\bf T})}\}$. We first construct a $p$-adic family $(\Theta_\lambda)_\lambda$ satisfying the above conditions with $\lambda$ only running over ${\cal R}_{\lambda_0}$. To this end, we enumerate the weights $\lambda$ in ${\cal R}_{\lambda_0}$ in a sequence $\lambda_0,\lambda_1,\lambda_2,\lambda_3,\ldots$.
We inductively construct elements $\Theta_{\lambda_i}\in{\cal E}({\bf H}_{\lambda_i})$, $i=0,1,2,3,\ldots$ such that $\Theta_{\lambda_0}=\Theta$ and ${\lambda_i}\equiv {\lambda_j}\pmod{(p-1)p^m X({\bf T})}$ implies $\Theta_{\lambda_i}\equiv \Theta_{\lambda_j}\pmod{p^{{\sf a}(m+1)+{\sf b}}}$. Clearly, we set $\Theta_{\lambda_0}=\Theta$. Assume that $\Theta_{\lambda_0},\ldots,\Theta_{\lambda_n}$ have been defined such that $\lambda_i\equiv \lambda_j\pmod{(p-1)p^m X({\bf T})}$ implies that $\Theta_{\lambda_i}\equiv \Theta_{\lambda_j}\pmod{p^{{\sf a}(m+1)+{\sf b}}}$ for all $i,j=0,\ldots,n$. To define $\Theta_{\lambda_{n+1}}$ we select $a\in\{0,1,2,\ldots,n\}$ such that $$ v_p(\lambda_{n+1}-\lambda_a)\ge v_p(\lambda_{n+1}-\lambda_i)\quad\mbox{for all}\; i=0,\ldots,n $$ We set $w_1=v_p(\lambda_{n+1}-\lambda_a)$, hence, $\lambda_{n+1}-\lambda_a\in (p-1)p^{w_1} X({\bf T})$ (note that $\lambda_a-\lambda_{n+1}\in (p-1)X({\bf T})$ because $\lambda_{n+1},\lambda_a\in{\cal R}_{\lambda_0}$). By (2.4) Remark there is $\Theta\in{\cal E}({\bf H}_{\lambda_{n+1}})$ such that $\Theta\equiv\Theta_{\lambda_a}\pmod{p^{{\sf a}(w_1+1)+{\sf b}}}$. We then set $\Theta_{\lambda_{n+1}}$ equal to this $\Theta$.
Let $i\in\{0,\ldots,n\}$ be arbitrary and set $w_3=v_p(\lambda_{n+1}-\lambda_i)$, hence, $\lambda_{n+1}\equiv\lambda_i\pmod{(p-1)p^{w_3} X({\bf T})}$. We have to show that $\Theta_{\lambda_{n+1}}\equiv \Theta_{\lambda_i}\pmod{p^{{\sf a}(w_3+1)+{\sf b}}}$. To this end we set $w_2=v_p(\lambda_a-\lambda_i)$. $$ \left.\begin{array}{cccc} &&&\bullet \lambda_{n+1}\\ w_1\{&&/&\\ &\lambda_a\bullet&&\\
w_2\{&|&&\\ &\lambda_i\bullet&&\\ \end{array}\right\}w_3. $$ We know that $\Theta_{\lambda_{n+1}}\equiv \Theta_{\lambda_a}\pmod{p^{{\sf a}(w_1+1)+{\sf b}}}$ by definition of $\Theta_{\lambda_{n+1}}$ and that $\Theta_{\lambda_a}\equiv \Theta_{\lambda_i}\pmod{p^{{\sf a}(w_2+1)+{\sf b}}}$ by our induction hypotheses, hence, $$ \Theta_{\lambda_{n+1}}\equiv \Theta_{\lambda_i}\pmod{p^{{\sf a}({\rm min}\{w_1,w_2\}+1)+{\sf b}}}.\leqno(2) $$ We distinguish cases.
{\it Case A} $w_2> w_1$. In this case ${\rm min}\{w_1,w_2\}=w_1$ and $w_3=w_1$ by the $p$-adic triangle inequality. Hence, equation (2) implies that $\Theta_{\lambda_{n+1}}\equiv \Theta_{\lambda_i}\pmod{p^{{\sf a}(w_3+1)+{\sf b}}}$.
{\it Case B} $w_2< w_1$. In this case ${\rm min}\{w_1,w_2\}=w_2$ and $w_3=w_2$. Hence, equation (2) implies that $\Theta_{\lambda_{n+1}}\equiv \Theta_{\lambda_i}\pmod{p^{{\sf a}(w_3+1)+{\sf b}}}$.
{\it Case C} $w_2=w_1$. In this case ${\rm min}\{w_1,w_2\}=w_1$. On the other hand, by the choice of $a$ we know that $w_1\ge w_3$; thus equation (2) yields $\Theta_{\lambda_{n+1}}\equiv \Theta_{\lambda_i}\pmod{p^{{\sf a}(w_3+1)+{\sf b}}}$.
Thus, $(\Theta_\lambda)_\lambda$, $\lambda\in{\cal R}_{\lambda_0}$, is a $p$-adic continuous family. To obtain a $p$-adic family $(\Theta_\lambda)$ with $\lambda$ running through all of ${\cal R}$ we denote by $\{\mu_0=\lambda_0,\mu_1,\ldots,\mu_r\}$ a system of representatives for $X({\bf T})/(p-1) X({\bf T})$. For any $i=1,\ldots,r$ we construct in the same way as above a $p$-adic family $(\Theta_\lambda)_\lambda$ with $\lambda$ running through ${\cal R}_{\mu_i}$. Since $\mu_i\not\equiv\mu_j\pmod{(p-1)X({\bf T})}$ if $i\not=j$ the union $(\Theta_\lambda)$, $\lambda\in\bigcup_i {\cal R}_{\mu_i}$ then is a $p$-adic family satisfying the requirements of the Proposition. This completes the proof.
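{\it Remark. } The construction in the above proof is effectively a greedy procedure: the weights are processed in an arbitrary enumeration, and each new weight is attached to the $p$-adically closest weight that has already been treated. The following sketch (in Python; the helper {\tt transfer} is hypothetical and stands for the mod $p^{{\sf a}(w+1)+{\sf b}}$ transfer whose existence is guaranteed by the hypothesis of the Proposition) merely records this order of processing.
\begin{verbatim}
def build_family(weights, theta0, vp_weight, transfer):
    # weights: the enumeration lambda_0, lambda_1, ... of R_{lambda_0}.
    # vp_weight(l1, l2): the valuation v_p(l1 - l2) on X(T).
    # transfer(n, theta, w): an eigencharacter of H_{lambda_n} congruent
    #   to theta modulo p^(a*(w+1)+b); assumed to exist by hypothesis.
    thetas = [theta0]
    for n in range(1, len(weights)):
        # pick an already-treated weight p-adically closest to weights[n]
        a = max(range(n), key=lambda i: vp_weight(weights[n], weights[i]))
        w1 = vp_weight(weights[n], weights[a])
        thetas.append(transfer(n, thetas[a], w1))
    return thetas
\end{verbatim}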
\section{Mod $p^c$ Transfer}
We show that reduced multiplicities can be computed as traces of certain operators and we use this to establish a ``mod $p^c$ transfer'' for eigencharacters.
{\bf (2.1) The dual map. } We let ${\cal H}=\bar{\Bbb Q}_p[T_\ell,\,\ell\in I]$ and ${\cal H}'=\bar{\Bbb Q}_p[T_\ell,\,\ell\in I']$ be countably generated polynomial algebras and we set ${\cal H}_{\cal O}={\cal O}[T_\ell,\,\ell\in I]$ and ${\cal H}'_{\cal O}={\cal O}[T_\ell,\,\ell\in I']$. We assume that there is a morphism of ${\cal O}$-algebras $$ \Phi:\,{\cal H}'_{\cal O}\rightarrow {\cal H}_{\cal O}. $$ The morphism $\Phi$ induces a dual map $$ \Phi^\vee:\,\hat{\cal H}_{\cal O}\rightarrow\hat{\cal H}'_{\cal O}\leqno(1) $$ by sending $\mu$ to $\Phi^\vee(\mu)=\mu\circ \Phi$, i.e. the diagram $$ \begin{array}{ccc} {\cal H}&\stackrel{\Phi}{\leftarrow}&{\cal H}'\\ \mu\;\searrow&&\swarrow\;\Phi^\vee(\mu)\\ &\bar{\Bbb Q}_p&\\ \end{array} $$ commutes. Thus, for any $v_\mu\in {\bf H}(\mu)$, $\mu\in\hat{\cal H}_{\cal O}$, and any $T'\in{\cal H}'$ we obtain $$ \Phi(T') v_\mu=\Phi^\vee(\mu)(T')v_\mu. $$
{\bf (2.2) Remark. } Let ${\bf H}$ resp. ${\bf H}'$ be a ${\cal H}$ resp. ${\cal H}'$-module which is defined over ${\cal O}$ with respect to the lattice ${\bf H}_{\cal O}$ resp. ${\bf H}'_{\cal O}$. In general, $\Phi^\vee$ does not induce a mapping on eigencharacters $$ \Phi^\vee:\,{\cal E}({\bf H})\rightarrow{\cal E}({\bf H}'). $$ We want to examine whether this is the case modulo powers of $p$, i.e. whether for any $\Theta\in{\cal E}({\bf H})$ there is a $\Theta'\in{\cal E}({\bf H}')$ such that $$ \Phi^\vee(\Theta)\equiv \Theta'\pmod{p^c}. $$ This is equivalent to the existence of a map $$ \Psi^{[c]}:\,{\cal E}({\bf H})\rightarrow{\cal E}({\bf H}') $$ such that $\Psi^{[c]}(\Theta)\equiv \Phi^\vee(\Theta)\pmod{p^c}$ for all $\Theta\in{\cal E}({\bf H})$. By (1.7) we can establish the existence of the map $\Psi^{[c]}$ by comparing the mod $p^c$ reduced multiplicities of $\Theta$ and $\Phi^\vee(\Theta)$ in ${\bf H}$ and ${\bf H}'$, i.e. we need to compute reduced multiplicities. This will be based on the following Lemma which expresses the reduced multiplicities $m_{\bf H}(\Theta,c)$ and $m_{{\bf H}'}(\Phi^\vee (\Theta),c)$ as traces of a certain element in ${\cal H}'$.
{\bf (2.3) Reduced multiplicities as traces. } From now on we assume that $\Phi:\,{\cal H}'_{\cal O}\rightarrow{\cal H}_{\cal O}$ is {\it surjective}.
{\bf Lemma. } {\it Assume that ${\rm dim}\,{\bf H},\,{\rm dim}\,{\bf H}'\le \frac{1}{2} M$ for some $M\in 2{\Bbb N}$ and let $m\in {\Bbb N}$. For any $\Theta\in\hat{\cal H}_{\cal O}$ there is an element $e(\Theta)\in{\cal H}'$ and a rational number $c=c(\Theta)\ge m-(M+\frac{3}{2})\log_p M$ such that the following holds
\begin{itemize}
\item $e(\Theta)\in\frac{1}{\xi}{\cal H}_{\cal O}'$, where $\xi\in{\cal O}$ with $v_p(\xi)\le Mm$
\item ${\rm tr}\,(e(\Theta)|{\bf H}')\equiv m_{{\bf H}'}(\Phi^\vee(\Theta),c)\pmod{p^{\log_p M}}$
\item ${\rm tr}\,(\Phi(e(\Theta))|{\bf H})\equiv m_{\bf H}(\Theta,c)\pmod{p^{\log_p M}}.$
\end{itemize}
(We note that $\log_p M\in{\Bbb R}_{\ge 0}$ and the congruence has to be understood as in (1.1).) }
{\it Proof. } Let $\Theta\in\hat{\cal H}_{\cal O}$. We proceed in steps.
a.) We first define $c$. We abbreviate $l=\log_p M$.
We set $$ \Omega=\Omega(\Theta)=\{v_p(\mu-\Theta),\,\mu\in{\cal E}({\bf H})\}\cup \{v_p(\mu'-\Phi^\vee(\Theta)),\,\mu'\in{\cal E}({\bf H}')\}\subseteq{\Bbb Q} $$ and we define the interval $$ {\bf I} =\{r\in{\Bbb Q}:\,m-(M+3/2)l\le r\le m\}\subset{\Bbb Q}_{\ge 0}. $$
Since ${\bf I}$ has length $(M+3/2)l$ and since $|\Omega|\le|{\cal E}({\bf H})|+|{\cal E}({\bf H}')|\le {\rm dim}\,{\bf H}+{\rm dim}\,{\bf H}'\le M$ there is a $c\in{\bf I}$ such that $$ [c,c+l]\cap \Omega=\emptyset. $$
Thus, if $\gamma\in{\cal E}({\bf H})$ with $v_p(\gamma-\Theta)\ge c$ then we know that $v_p(\gamma-\Theta)\in\Omega\cap[c,\infty)$, hence, $$ v_p(\gamma-\Theta)> c+l.\leqno(2) $$ Similarly, if $\gamma'\in {\cal E}({\bf H}')$ with $v_p(\gamma'-\Phi^\vee(\Theta))\ge c$ then $$ v_p(\gamma'-\Phi^\vee(\Theta))> c+l.\leqno(2') $$ We note that the number $c$ obviously satisfies $$ m-(M+3/2)l\le c\le m. $$
b.) Next we define $e(\Theta)$. We note that for any $\mu\in{\cal E}({\bf H})$ with $\mu\not\equiv \Theta\pmod{p^c}$, i.e. $v_p(\mu-\Theta)<c$, there is an element $T_\mu'\in{\cal H}'_{\cal O}$ such that $$ \mu(\Phi(T_\mu'))\not\equiv \Theta(\Phi(T_\mu'))\pmod{p^c}. $$ (note that we assume $\Phi$ to be surjective); hence, $$ \Phi^\vee(\mu)(T_\mu')\not\equiv \Phi^\vee(\Theta)(T_\mu')\pmod{p^c}.\leqno(3) $$ Similarly, for any $\mu'\in {\cal E}({\bf H}')$ with $\mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}$, i.e. $v_p(\mu'-\Phi^\vee(\Theta))<c$, there is an element $T_{\mu'}\in{\cal H}'_{\cal O}$ such that $$ \mu'(T_{\mu'})\not\equiv \Phi^\vee(\Theta)(T_{\mu'})\pmod{p^c}.\leqno(3') $$ We set $$ \xi=\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \Phi^\vee(\Theta)(T_\mu')-\Phi^\vee(\mu)(T_\mu')\, \prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}} \Phi^\vee(\Theta)(T_{\mu'})-\mu'(T_{\mu'}) \;(\in{\cal O}) $$
and we define $$ e(\Theta)=\frac{1}{\xi}\,\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} T_\mu'-\Phi^\vee(\mu)(T_\mu')\, \prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}T_{\mu'}-\mu'(T_{\mu'})\;\in\frac{1}{\xi}{\cal H}_{\cal O}'. $$
Equations $(3)$ and $(3')$ imply that $$
v_p(\xi)\le |{\cal E}({\bf H})|c+|{\cal E}({\bf H}')|c\le Mc\le Mm, $$ which in particular implies the first claim of the Lemma.
c.) We compute the trace of $e(\Theta)$ on ${\bf H}'$. We write $$ {\bf H}'=\bigoplus_{\gamma'\in{\cal E}({\bf H}')} {\bf H}'(\gamma'). $$ Since ${\cal H}'$ is commutative, there is for any $\gamma'\in{\cal E}({\bf H}')$ a basis ${\cal B}(\gamma')$ of ${\bf H}'(\gamma')$ such that all $T'\in{\cal H}'$ are represented on ${\bf H}'(\gamma')$ by an upper triangular matrix: $$ {\cal D}_{{\cal B}(\gamma')}(T')=\left(\begin{array}{ccc}\gamma'(T') &&*\\&\ddots&\\&&\gamma'(T') \end{array} \right). $$ Then $e(\Theta)$ is represented on ${\bf H}'(\gamma')$ by the matrix $$ {\cal D}_{{\cal B}(\gamma')}(e(\Theta))=\left(\begin{array}{ccc}x &&*\\&\ddots&\\&&x \end{array} \right), $$ where $$ x=x(\gamma')=\frac{1}{\xi}\,\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \gamma'(T_\mu')-\Phi^\vee(\mu)(T_\mu')\, \prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}\gamma'(T_{\mu'})-\mu'(T_{\mu'}) \in\frac{1}{\xi}{\cal O}. $$
Using this we determine the trace of $e(\Theta)$ on ${\bf H}'(\gamma')$, $\gamma'\in{\cal E}({\bf H}')$, as follows.
If $\gamma'\not\equiv\Phi^\vee(\Theta)\pmod{p^c}$ then $x=0$, hence, ${\rm tr}\,(e(\Theta)|{{\bf H}'(\gamma')})=0$.
If $\gamma'\equiv \Phi^\vee(\Theta)\pmod{p^c}$ then we write for any $\mu\in{\cal E}({\bf H})$, $\mu\not\equiv\Theta\pmod{p^c}$ $$ \gamma'(T_\mu')=\Phi^\vee(\Theta)(T_\mu')+\delta_\mu', $$ where $\delta_\mu'\in{\cal O}$.
Since $v_p(\gamma'-\Phi^\vee(\Theta))\ge c$, equation (2') implies that $v_p(\gamma'-\Phi^\vee(\Theta))>c+l$, hence, $$ v_p(\delta_\mu')> c+l.\leqno(4) $$ Similarly, we write $$ \gamma'(T_{\mu'})=\Phi^\vee(\Theta)(T_{\mu'})+\delta_{\mu'} $$ and find $$ v_p(\delta_{\mu'})>c+l.\leqno(4') $$ Recalling the definition of $\xi$ we obtain \begin{eqnarray*} x&=&\frac{\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \Phi^\vee(\Theta)(T_\mu')-\Phi^\vee(\mu)(T_\mu')+\delta_\mu'}{\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \Phi^\vee(\Theta)(T_\mu')-\Phi^\vee(\mu)(T_\mu')}\, \frac{\prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}\Phi^\vee(\Theta)(T_{\mu'})-\mu'(T_{\mu'})+\delta_{\mu'}}{\prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}\Phi^\vee(\Theta)(T_{\mu'})-\mu'(T_{\mu'})}\\
\end{eqnarray*}
Since $v_p(\Phi^\vee(\Theta)(T_\mu')-\Phi^\vee(\mu)(T_\mu'))<c$ for all $\mu\not\equiv\Theta\pmod{p^c}$ (cf. equation(3)) and since, $v_p(\delta_\mu')>c+l$ (cf. equation (4)) we obtain that the first factor on the right hand side of the above equation for $x$ is congruent to $1$ modulo $p^l$. Similarly, using equations (3') and (4') we find that the second factor on the right hand side of the above equation for $x$ is congruent to $1$ modulo $p^l$. Thus, we obtain $x\equiv 1\pmod{p^l}$, hence, $$
{\rm tr}(e(\Theta)|{\bf H}'(\gamma'))\equiv{\rm dim}\,{\bf H}'(\gamma')\pmod{p^l}. $$ Summing over all $\gamma'\in{\cal E}({\bf H}')$ finally yields $$
{\rm tr}\,(e(\Theta)|{\bf H}')\equiv m_{{\bf H}'}(\Phi^\vee(\Theta),c)\pmod{p^l}. $$
d.) Quite analogously we compute the trace of $\Phi(e(\Theta))$ on ${\bf H}$. We note that $$ \Phi(e(\Theta))=\frac{1}{\xi}\,\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \left(\Phi(T_\mu')-\Phi^\vee(\mu)(T_\mu')\right)\, \prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}\left(\Phi(T_{\mu'})-\mu'(T_{\mu'})\right)\;\in\frac{1}{\xi}{\cal H}_{\cal O}. $$ Let $\gamma\in{\cal E}({\bf H})$. We choose a basis ${\cal B}(\gamma)$ of ${\bf H}(\gamma)$ such that any $T\in{\cal H}$ is upper triangular on ${\bf H}(\gamma)$: $$ {\cal D}_{{\cal B}(\gamma)}(T)=\left(\begin{array}{ccc}\gamma(T) &&*\\&\ddots&\\&&\gamma(T) \end{array} \right). $$ Then $\Phi(e(\Theta))$ is represented on ${\bf H}(\gamma)$ by the matrix $$ {\cal D}_{{\cal B}(\gamma)}(\Phi(e(\Theta)))=\left(\begin{array}{ccc}x &&*\\&\ddots&\\&&x \end{array} \right), $$ where $$ x=x(\gamma)=\frac{1}{\xi}\,\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \left(\Phi^\vee(\gamma)(T_\mu')-\Phi^\vee(\mu)(T_\mu')\right)\, \prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}\left(\Phi^\vee(\gamma)(T_{\mu'})-\mu'(T_{\mu'})\right) \in\frac{1}{\xi}{\cal O}. $$
If $\gamma\not\equiv\Theta\pmod{p^c}$ then $x=0$, hence, ${\rm tr}\,(\Phi(e(\Theta))|{{\bf H}(\gamma)})=0$.
If $\gamma\equiv \Theta\pmod{p^c}$ then equation (2) implies that $v_p(\gamma-\Theta)>c+l$. We write $\gamma(\Phi(T_\mu'))=\Theta(\Phi(T_\mu'))+\delta_\mu'$, or, equivalently, $$ \Phi^\vee(\gamma)(T_\mu')=\Phi^\vee(\Theta)(T_\mu')+\delta_\mu', $$ where $$ v_p(\delta_\mu')> c+l.\leqno(5) $$ Similarly, we write $$ \Phi^\vee(\gamma)(T_{\mu'})=\Phi^\vee(\Theta)(T_{\mu'})+\delta_{\mu'}, $$ where $$ v_p(\delta_{\mu'})> c+l.\leqno(5') $$ Recalling the definition of $\xi$ we obtain \begin{eqnarray*} x&=&\frac{\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \left(\Phi^\vee(\Theta)(T_\mu')-\Phi^\vee(\mu)(T_\mu')+\delta_\mu'\right)}{\prod_{\mu\in{\cal E}({\bf H})\atop \mu\not\equiv \Theta\pmod{p^c}} \left(\Phi^\vee(\Theta)(T_\mu')-\Phi^\vee(\mu)(T_\mu')\right)}\, \frac{\prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}\left(\Phi^\vee(\Theta)(T_{\mu'})-\mu'(T_{\mu'})+\delta_{\mu'}\right)}{\prod_{\mu'\in{\cal E}({\bf H}')\atop \mu'\not\equiv \Phi^\vee(\Theta)\pmod{p^c}}\left(\Phi^\vee(\Theta)(T_{\mu'})-\mu'(T_{\mu'})\right)}\\
\end{eqnarray*}
As above, using equations (3), (3'), (5), (5') we obtain $x\equiv 1\pmod{p^l}$, hence, $$
{\rm tr}(\Phi(e(\Theta))|{\bf H}(\gamma))\equiv{\rm dim}\,{\bf H}(\gamma)\pmod{p^l}. $$ Summing over all $\gamma\in{\cal E}({\bf H})$ finally yields $$
{\rm tr}\,(\Phi(e(\Theta))|{\bf H})\equiv m_{\bf H}(\Theta,c)\pmod{p^l} $$ Hence, the Lemma is proven.
{\it Remark. } 1.) Since $m_{\bf H}(\Theta,c)$ and $m_{{\bf H}'}(\Phi^\vee(\Theta),c)$ are smaller than or equal to $M$ the congruences in (2.3) Lemma uniquely determine $m_{\bf H}(\Theta,c)$ and $m_{{\bf H}'}(\Phi^\vee(\Theta),c)$.
2.) Since $c\ge m-(M+\frac{3}{2})\log_p M$, by increasing $m$ we can search for eigencharacters which are arbitrarily close to $\Phi^ \vee(\Theta)$ or $\Theta$; in doing so the denominators of $e(\Theta)$ will not grow unreasonably fast because $v_p(\xi)\le Mm$.
3.) The proof is not constructive, e.g. we do not obtain the value of $c$; in particular, we do not know whether we can choose $c=m-(M+\frac{3}{2})\log_p M$.
{\bf (2.4) } Using (2.3) Lemma we can give the following criterion for the existence of the ``mod $p^c$ transfer'' $\Psi^{[c]}$, which makes it possible to establish the ``mod $p^c$ transfer'' via a comparison of trace formulas.
{\bf Theorem. } {\it Assume that ${\rm dim}\,{\bf H},\,{\rm dim}\,{\bf H}' \le \frac{1}{2}M$ for some $M\in 2{\Bbb N}$. Assume that there is a rational number $s$ such that $$
{\rm tr}\,(\Phi(T')|{\bf H})\equiv {\rm tr}\,(T'|{\bf H}')\pmod{p^s} $$ for all $T'\in{\cal H}_{\cal O}'$. Then, for any $\Theta\in\hat{\cal H}_{\cal O}$ there is a rational number $c=c(\Theta)\ge \frac{s}{M}-(M+2)\log_p M$ such that $$ m_{\bf H}(\Theta,c)= m_{{\bf H}'}(\Phi^\vee(\Theta),c). $$ Hence, for any $\Theta\in{\cal E}({\bf H})$ there is an element $\Theta'\in{\cal E}({\bf H}')$ such that $$ \Theta'\equiv \Phi^\vee(\Theta)\pmod{p^{\frac{s}{M}-(M+2)\log_p M}}, $$ or, equivalently, there is a map on eigencharacters $$ \Psi^{[c]}:\,{\cal E}({\bf H})\rightarrow{\cal E}({\bf H}') $$ such that $$ \Psi^{[c]}(\Theta)\equiv \Phi^\vee(\Theta)\pmod{p^{\frac{s}{M}-(M+2)\log_p M}} $$ for all $\Theta\in{\cal E}({\bf H})$.
}
{\it Proof. } We set $m=\frac{s-\log_p M}{M}$. Let $\Theta\in\hat{\cal H}_{\cal O}$. According to (2.3) Lemma there is an element $e(\Theta)\in\frac{1}{\xi}{\cal H}'_{\cal O}$, where $v_p(\xi)\le Mm=s-\log_p M$, and a rational number $$ c=c(\Theta)\ge m-(M+\frac{3}{2})\log_pM=\frac{s}{M}-(M+2)\log_p M $$
(note that $\frac{1}{M}\le \frac{1}{2}$) such that ${\rm tr}\,(\Phi(e(\Theta))|{\bf H})\equiv m_{\bf H}(\Theta,c)$ and ${\rm tr}\,(e(\Theta)|{\bf H}')\equiv m_{{\bf H}'}(\Phi^\vee(\Theta),c)\pmod{p^{\log_p M}}$. Since $v_p(\xi)\le Mm=s-\log_p M$, the assumption of the Theorem implies $$
{\rm tr}\,(\Phi(e(\Theta))|{\bf H})\equiv{\rm tr}\,(e(\Theta)|{\bf H}')\pmod{p^{\log_p M}}, $$ hence, $$ m_{\bf H}(\Theta,c)\equiv m_{{\bf H}'}(\Phi^\vee(\Theta),c)\pmod{p^{\log_p M}}. $$ Since $m_{\bf H}(\Theta,c)$ and $m_{{\bf H}'}(\Phi^\vee(\Theta),c)$ are natural numbers which are smaller than $\frac{1}{2}M\le p^{\log_p M}$ this implies $$ m_{\bf H}(\Theta,c)= m_{{\bf H}'}(\Phi^\vee(\Theta),c). $$ In particular, for any $\Theta\in{\cal E}({\bf H})$ there is an element $\Theta'\in{\cal E}({\bf H}')$ such that $\Theta'\equiv \Phi^\vee(\Theta)\pmod{p^c}$. Hence, the proof is complete.
{\bf (2.5) Corollary. }{\it Assume that ${\rm dim}\,{\bf H},\,{\rm dim}\,{\bf H}' < \frac{1}{2}M$ for some $M\in 2{\Bbb N}$. Assume that there is a rational number $s$ such that $$
{\rm tr}\,(T|{\bf H})\equiv {\rm tr}\,(T|{\bf H}')\pmod{p^s} $$ for all $T\in{\cal H}_{\cal O}$. Then, for any $\Theta\in\hat{\cal H}_{\cal O}$ there is a rational number $c\ge \frac{s}{M}-(M+2)\log_p M$ such that $$ m_{\bf H}(\Theta,c)= m_{{\bf H}'}(\Theta,c). $$ Hence, for any $\Theta\in{\cal E}({\bf H})$ there is an eigencharacter $\Theta'\in{\cal E}({\bf H}')$ such that $$ \Theta'\equiv \Theta\pmod{p^{\frac{s}{M}-(M+2)\log_p M}}. $$ }
{\it Proof. } This is the special case ${\cal H}={\cal H}'$ and $\Phi={\rm id}$.
\section{Systems of higher congruences in the finite slope case}
Combining (1.9) Proposition with (2.5) Corollary we would obtain a trace criterion for the existence of $p$-adic continuous families of eigencharacters passing through a given eigencharacter. In this section, however, we construct certain elements which behave like approximate idempotents attached to the slope $\le\alpha$ subspace of a ${\cal H}$-module ${\bf H}$; using these we will then obtain a criterion for the existence of finite slope $p$-adic continuous families of eigencharacters.
{\bf (3.1) Slope subspaces. } As in (1.2) we set ${\cal H}=\bar{\Bbb Q}_p[T_\ell,\,\ell\in I]$ and ${\cal H}_{\cal O}={\cal O}[T_\ell,\,\ell\in I]$ (countably generated). We fix an element $T\in{\cal H}_{{\cal O}}$. For any finite dimensional ${\cal H}$-module ${\bf H}$ which is defined over ${\cal O}$ with respect to the lattice ${\bf H}_{\cal O}$ we denote by ${\bf H}^\alpha$, $\alpha\in{\Bbb Q}_{\ge 0}$, the slope $\alpha$ subspace of ${\bf H}$ with respect to the operator $T$, i.e. $$ {\bf H}^\alpha=\bigoplus_{\mu\in{\cal O}\atop v_p(\mu)=\alpha} {\bf H}(\mu) $$ where ${\bf H}(\mu)$ is the generalized eigenspace attached to $T$ and $\mu$ and we set $$ {\bf H}^{\le\alpha}=\bigoplus_{\beta\le\alpha} {\bf H}^\beta. $$ We denote by $\Phi_{\bf H}\subset {\cal O}$ the set of eigenvalues of $T$ acting on ${\bf H}$ and by $\Phi_{\bf H}^{\le\alpha}\subseteq \Phi_{\bf H}$ the subset of all eigenvalues $\gamma$ of $T$ satisfying $v_p(\gamma)\le \alpha$. Hence, ${\bf H}^{\le\alpha}=\bigoplus_{\gamma\in \Phi_{\bf H}^{\le\alpha}}{\bf H}(\gamma)$.
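{\it Example. } To fix ideas (this toy case is not taken from the following sections and is only meant to unwind the definitions), suppose that ${\bf H}$ is three dimensional and that the fixed operator $T$ acts on ${\bf H}$ with characteristic polynomial $(X-1)(X-p)^2$, the generalized eigenspace ${\bf H}(p)$ being two dimensional. Then $$ {\bf H}^0={\bf H}(1),\qquad {\bf H}^1={\bf H}(p),\qquad {\bf H}^{\le 1}={\bf H}(1)\oplus{\bf H}(p)={\bf H}, $$ and $\Phi_{\bf H}=\{1,p\}$, $\Phi_{\bf H}^{\le 0}=\{1\}$.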
{\bf (3.2) Construction of approximate idempotents. } Let ${\bf H},{\bf H}'$ be arbitrary ${\cal H}$-modules, finite dimensional and defined over ${\cal O}$ and select a slope $\alpha\in{\Bbb Q}_{\ge 0}$.
For any $\gamma\in\Phi_{\bf H}$ we choose a basis ${\cal B}={\cal B}(\gamma)$ of ${\bf H}(\gamma)$ such that the representing matrix
${\cal D}_{\cal B}(T|_{{\bf H}(\gamma)})$ of $T$ on ${\bf H}(\gamma)$ is upper triangular: $$
{\cal D}_{\cal B}(T|_{{\bf H}(\gamma)})= \left( \begin{array}{ccc} \gamma&&*\\ &\ddots&\\ &&\gamma\\ \end{array} \right).\leqno(1) $$
Similarly, for any $\gamma\in\Phi_{{\bf H}'}$ we choose a basis ${\cal B}'={\cal B}'(\gamma)$ of ${\bf H}'(\gamma)$ such that the matrix representing $T$ on ${\bf H}'(\gamma)$ is upper triangular: $$
{\cal D}_{{\cal B}'}(T|_{{\bf H}'(\gamma)})= \left( \begin{array}{ccc} \gamma&&*\\ &\ddots&\\ &&\gamma\\ \end{array} \right). $$
We define the element $$ {\mathbf e}^{\le\alpha}={\mathbf e}_{{\bf H},{\bf H}'}^{\le\alpha}=1-\prod_{\mu\in \Phi_{\bf H}^{\le\alpha}\cup\Phi_{{\bf H}'}^{\le\alpha}}\frac{T-\mu}{-\mu}\in\bar{\Bbb Q}_p[T]. $$ Clearly, ${\mathbf e}^{\le\alpha}=p^{\le\alpha}(T)$, where the polynomial $p^{\le\alpha}(X)$ is given by $$ p^{\le\alpha}=p_{{\bf H},{\bf H}'}^{\le\alpha}=1-\prod_{\mu\in \Phi_{\bf H}^{\le\alpha}\cup\Phi_{{\bf H}'}^{\le\alpha}}\frac{X-\mu}{-\mu}\in \bar{\Bbb Q}_p[X]. $$ We want to collect some properties of ${\mathbf e}^{\le\alpha}$ and $p^{\le\alpha}$. To this end we let $M(\alpha)\in 2{\Bbb N}$, $\alpha\in{\Bbb Q}_{\ge 0}$, be a collection of natural numbers such that $$ {\rm dim}\,{\bf H}^{\le\alpha}\le \frac{1}{2}M(\alpha)\quad\mbox{and}\quad{\rm dim}\,{\bf H}'^{\le \alpha}\le \frac{1}{2}M(\alpha)\leqno(2) $$ for all $\alpha\in{\Bbb Q}_{\ge 0}$. For an arbitrary polynomial $p=\sum_{i\ge 0}a_iX^i\in\bar{\Bbb Q}_p[X]$ we define its slope as $$ {\bf S}(p)={\rm sup}\,\{s\in{\Bbb Q}\cup\{-\infty\}: v_p(a_i)\ge si\,\mbox{for all $i\ge 0$}\}, $$ hence, ${\bf S}(p)>-\infty$ implies $v_p(a_0)\ge 0$. Easy calculation shows that $$ {\bf S}(pq)\ge {\rm min}\{{\bf S}(p),{\bf S}(q)\}\leqno(3a) $$ and $$ {\bf S}(p+q)\ge {\rm min}\{{\bf S}(p),{\bf S}(q)\}.\leqno(3b) $$
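{\it Example. } As an illustration of these definitions (not used in the sequel): if $\Phi_{\bf H}^{\le\alpha}\cup\Phi_{{\bf H}'}^{\le\alpha}=\{\mu\}$ consists of a single eigenvalue, then $$ {\mathbf e}^{\le\alpha}=1-\frac{T-\mu}{-\mu}=\frac{T}{\mu},\qquad p^{\le\alpha}(X)=\frac{X}{\mu}, $$ and ${\bf S}(p^{\le\alpha})=-v_p(\mu)\ge-\alpha$. Similarly, for the polynomial $q(X)=1+p^{-1}X+p^{-2}X^2$ one has ${\bf S}(q)=-1$, since $v_p(a_i)\ge -i$ for $i=0,1,2$ with equality for $i=1,2$.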
{\bf (3.3) Lemma. }{\it 1.) For any $\gamma\in\Phi_{\bf H}- \Phi_{\bf H}^{\le \alpha}$ we have $$
{\cal D}_{\cal B}({\mathbf e}_{{\bf H},{\bf H}'}^{\le\alpha}|_{{\bf H}(\gamma)})= \left( \begin{array}{ccc} \zeta&&*\\ &\ddots&\\ &&\zeta\\ \end{array} \right) $$ where $\zeta\in{\cal O}$ satisfies $v_p({\zeta})\ge \frac{2}{M(\alpha+1)}$. The analogous statement holds for $\gamma\in \Phi_{{\bf H}'}- \Phi_{{\bf H}'}^{\le \alpha}$.
2.) For any $\gamma\in\Phi_{\bf H}^{\le \alpha}$ we have $$
{\cal D}_{\cal B}({\mathbf e}_{{\bf H},{\bf H}'}^{\le\alpha}|_{{\bf H}(\gamma)})= \left( \begin{array}{ccc} 1&&*\\ &\ddots&\\ &&1\\ \end{array} \right). $$ Again, the analogous statement holds for $\gamma\in \Phi_{{\bf H}'}^{\le \alpha}$.
3.) $$
{\rm deg}\,p_{{\bf H},{\bf H}'}^{\le\alpha}=|\Phi_{\bf H}^{\le\alpha}\cup\Phi_{{\bf H}'}^{\le\alpha}|\le {\rm dim}\, {\bf H}^{\le\alpha}+{\rm dim}\,{{\bf H}'}^{\le\alpha}\le M(\alpha). $$
4.) $$ p_{{\bf H},{\bf H}'}^{\le\alpha}(0)=0 $$
5.) $${\bf S}(p_{{\bf H},{\bf H}'}^{\le\alpha})\ge -\alpha. $$ In particular, $(p^{\le\alpha})^L=\sum_{h\ge L} b_hX^h$, where $v_p(b_h)\ge -h\alpha$ for all $h\ge L$.
}
{\it Proof. } 1.) Let $\gamma\in\Phi_{\bf H}-\Phi_{\bf H}^{\le \alpha}$. Equation (1) implies that with respect to ${\cal B}={\cal B}(\gamma)$ $$
{\cal D}_{\cal B}({\mathbf e}^{\le\alpha}|_{{\bf H}(\gamma)})= \left( \begin{array}{ccc} \zeta&&*\\ &\ddots&\\ &&\zeta\\ \end{array} \right) $$ where $$ \zeta=1-\prod_{\mu\in \Phi_{\bf H}^{\le\alpha}\cup\Phi_{{\bf H}'}^{\le\alpha}} \frac{\gamma-\mu}{-\mu}. $$ Let $\mu\in\Phi_{\bf H}^{\le\alpha}\cup \Phi_{{\bf H}'}^{\le\alpha}$ be arbitrary. We distinguish two cases. First, if $\gamma\not\in\Phi_{\bf H}^{\le\alpha+1}$, i.e. $v_p(\gamma)>\alpha+1$, then we obtain $v_p(\gamma)>v_p(\mu)+1\ge v_p(\mu)+2/M(\alpha+1)$.
Second, if $\gamma\in\Phi_{\bf H}^{\le\alpha+1}$, then $\gamma$ is an eigenvalue of $T$ acting on ${\bf H}^{\le\alpha+1}$, hence, it is a root of the characteristic polynomial of $T$ acting on ${\bf H}^{\le\alpha+1}$
which has degree ${\dim}\,{\bf H}^{\le\alpha+1}\le \frac{1}{2} M(\alpha+1)$. We deduce that $\gamma$ is contained in an extension of ${\Bbb Q}_p$ of degree less than or equal to $\frac{1}{2} M(\alpha+1)$ which implies that $v_p(\gamma)\in\frac{2}{M(\alpha+1)}{\Bbb N}_0$.
On the other hand, since $\mu\in\Phi_{\bf H}^{\le\alpha+1}\cup \Phi_{{\bf H}'}^{\le\alpha+1}$ we obtain quite similarly that $v_p(\mu)\in\frac{2}{M(\alpha+1)}{\Bbb N}_0$. Since $\gamma\not\in\Phi_{\bf H}^{\le\alpha}$ we know that $v_p(\gamma)>v_p(\mu)$, hence, $v_p(\gamma)\ge v_p(\mu)+2/M(\alpha+1)$. Thus, in both cases we find $v_p(\frac{\gamma}{\mu})\ge 2/M(\alpha+1)$. Since $$ \zeta=1-\prod_{\mu\in \Phi_{\bf H}^{\le\alpha}\cup\Phi_{{\bf H}'}^{\le\alpha}} \left(1-\frac{\gamma}{\mu}\right) $$ is a sum of products of the form $\pm\prod_\mu \frac{\gamma}{\mu}$, where $\mu$ runs over a non-empty subset of $\Phi_{\bf H}^{\le\alpha}\cup\Phi_{{\bf H}'}^{\le\alpha}$ (the summand "$1$" cancels), we deduce that $v_p(\zeta)\ge 2/M(\alpha+1)$.
2.) Immediate by the definition of ${\bf e}^{\le\alpha}$.
3.) and 4.) Clear
5.) By definition it is immediate that ${\bf S}(\frac{T-\mu}{-\mu})={\bf S}(\frac{T}{-\mu}+1)= -v_p(\mu)$. All $\mu$ appearing in the definition of ${\bf e}^{\le\alpha}$ satisfy $v_p(\mu)\le\alpha$; hence, using equation (3a) and (3b) we deduce ${\bf S}(p^{\le \alpha})\ge {\rm min}\{0,-\alpha\}=-\alpha$. The second statement follows because $X$ divides $p^{\le\alpha}$ and ${\bf S}((p^{\le\alpha})^L)\ge{\bf S}(p^{\le\alpha})$. This finishes the proof of the Lemma.
{\bf (3.4) Proposition. }{\it For any pair of finite dimensional ${\cal H}$-modules ${\bf H}$, ${\bf H}'$ which are defined over ${\cal O}$ and satisfy equation (2) and for any $T\in{\cal H}_{\cal O}$ we have $$
{\rm tr}\,(T ({{\bf e}_{{\bf H},{\bf H}'}^{\le\alpha}})^L|{\bf H})\equiv {\rm tr}\,(T|{{\bf H}^{\le\alpha}})\pmod{p^{\frac{2L}{M(\alpha+1)}}}. $$ The same congruence holds for ${\bf H}'$ in place of ${\bf H}$. }
{\it Proof. } Let $\gamma\in\Phi_{\bf H}$ and let ${\cal B}$ be a basis of ${\bf H}(\gamma)$ such that all elements of the commutative algebra ${\cal H}$ are represented on ${\bf H}(\gamma)$ by upper triangular matrices; in particular, equation (1) holds. (3.3) Lemma then implies that $T({\bf e}^{\le\alpha})^L$ is represented on ${\bf H}(\gamma)$ by an upper triangular matrix $$ \left( \begin{array}{ccc} \tau_1\zeta^L&&*\\ &\ddots&\\ &&\tau_r\zeta^L\\ \end{array} \right), $$ where $\tau_1,\ldots,\tau_r\in{\cal O}$ are the diagonal entries of $T$ on ${\bf H}(\gamma)$ and where $v_p(\zeta)\ge\frac{2}{M(\alpha+1)}$ if $v_p(\gamma)>\alpha$ and $\zeta=1$ if $v_p(\gamma)\le \alpha$. Since $v_p(\tau_i)\ge 0$ ($T\in{\cal H}_{\cal O}$) this implies the claim. The same argument works if we replace ${\bf H}$ by ${\bf H}'$, hence the proof is complete.
{\bf (3.5) Remark. } In particular, we obtain $$
\lim_{L\rightarrow\infty} \,{\rm tr}\,(T ({{\bf e}^{\le\alpha}})^L|{\bf H})= {\rm tr}\,(T|{{\bf H}^{\le\alpha}}) $$ and the same holds if we replace ${\bf H}$ by ${\bf H}'$. Thus, ${\bf e}^{\le\alpha}$ behaves like an approximate idempotent attached to the slope $\le\alpha$-subspaces of ${\bf H}$ and ${\bf H}'$. On the other hand, ${\bf e}^{\le\alpha}$ is not universal, i.e. it depends not only on $\alpha$ but also on the pair of modules ${\bf H}$ and ${\bf H}'$.
{\bf (3.6) A criterion for the existence of $p$-adic continuous families of finite slope. } We give the synthesis of our results obtained so far. We use the notations from (1.8), i.e. ${\bf G}/{\Bbb Q}$ is a reductive algebraic group with maximal split torus ${\bf T}$. Let ${\cal R}\subseteq X({\bf T})$ be a subset and let $({\bf H}_\lambda)$, $\lambda\in{\cal R}$, be a family of finite dimensional ${\cal H}$-modules. From now on we will always assume that the following assumptions hold for the family $({\bf H}_\lambda)$:
\begin{enumerate}
\item Any ${\bf H}_\lambda$ is defined over ${\cal O}$
\item There are numbers $M(\alpha)\in 2{\Bbb N}$, $\alpha\in{\Bbb Q}_{\ge 0}$, such that $$ {\rm dim}\,{\bf H}^{\le\alpha}_\lambda\le \frac{1}{2}M(\alpha) \quad\mbox{for all $\alpha\in{\Bbb Q}_{\ge 0}$ and all $\lambda\in {\cal R}$}. $$
\end{enumerate}
We select a slope $\alpha\in{\Bbb Q}_{\ge 0}$ and we put ${\bf e}_{\lambda,\lambda'}={\bf e}_{{\bf H}_\lambda,{\bf H}_{\lambda'}}^{\le \alpha}$.
{\bf (3.7) Proposition. } {\it Let $({\bf H}_\lambda)_{\lambda\in{\cal R}}$ be a family of ${\cal H}$-modules. Assume that there is a collection of rational numbers $a'=a'(\alpha),a=a(\alpha)\in{\Bbb Q}_{>0}$ and $b=b(\alpha)\in{\Bbb Q}_{\le 0}$ ($\alpha\in{\Bbb Q}_{\ge 0}$) with the following properties: i) $a'/M(\alpha+1)$, $a$ and $b$ are decreasing in $\alpha$; ii) for any $\alpha\in{\Bbb Q}_{\ge 0}$, any $m\in{\Bbb N}_0$ and any pair $\lambda,\lambda'\in X({\bf T})$ with $\lambda\equiv \lambda'\pmod{(p-1)p^m X({\bf T})}$ there is a natural number $L\ge a'(m+1)$ such that $$
{\rm tr}\,({\bf e}_{\lambda,\lambda'}^L T|{\bf H}_\lambda)\equiv {\rm tr}\,({\bf e}_{\lambda,\lambda'}^L T|{\bf H}_{\lambda'})\pmod{p^{a(m+1)+b}}\leqno(\dag) $$ for all $T\in{\cal H}_{\cal O}$. Then
1.) ${\rm dim}\,{\bf H}_\lambda^\alpha$ is locally constant as a function of $\lambda$, i.e. there is $D=D(\alpha)\in{\Bbb N}$ only depending on $\alpha$ such that $\lambda\equiv\lambda'\pmod{(p-1)p^D X({\bf T})}$ implies ${\rm dim}\,{\bf H}_\lambda^\alpha={\rm dim}\,{\bf H}_{\lambda'}^\alpha$.
2.) For any $\lambda_0\in{\cal R}$, any $\Theta\in{\cal E}({\bf H}_{\lambda_0}^{\alpha})$ fits in a $p$-adic continuous family of eigencharacters of slope $\alpha$, i.e. there is a family $(\Theta_\lambda)_{\lambda\in{\cal R}}$ such that
\begin{enumerate}
\item $\Theta_\lambda\in{\cal E}({\bf H}_\lambda^{\alpha})$
\item $\Theta_{\lambda_0}=\Theta$
\item $\lambda\equiv\lambda'\pmod{(p-1)p^m X({\bf T})}$ implies $$ \Theta_\lambda\equiv\Theta_{\lambda'}\pmod{p^{{\sf a}(m+1)+{\sf b}}}, $$ where ${\sf a}=\frac{1}{M(\alpha)}{\rm min}\,(a,\frac{2 a'}{M(\alpha+1)})$ and ${\sf b}=\frac{b}{M(\alpha)}-(M(\alpha)+2)\log_p (M(\alpha))$.
\end{enumerate}
}
{\it Proof. } We select $\alpha\in{\Bbb Q}_{\ge 0}$ and we proceed in steps.
a.) We let $\lambda\equiv\lambda'\pmod{(p-1)p^m X({\bf T})}$. We choose $L\ge a'(m+1)$ such that equation ($\dag$) holds. Together with (3.4) Proposition we obtain for all $T\in{\cal H}_{\cal O}$ $$
{\rm tr}\,(T|{\bf H}_\lambda^{\le\alpha})\equiv{\rm tr}\,({\bf e}_{\lambda,\lambda'}^LT|{\bf H}_\lambda)
\equiv {\rm tr}\,({\bf e}_{\lambda,\lambda'}^LT|{\bf H}_{\lambda'})
\equiv {\rm tr}\,(T|{\bf H}_{\lambda'}^{\le\alpha})\pmod{p^s},\leqno(4) $$ where $s=\bar{a}(m+1)+b$ with $\bar{a}={\rm min}\,(\frac{2 a'}{M(\alpha+1)},a)$ (note that $b\le 0$). We note that our assumptions imply that $s$ is decreasing in $\alpha$.
b.) We next show that ${\rm dim}\,{\bf H}_\lambda^{\le\alpha}$ is locally constant in $\lambda$. To this end we set $D=D(\alpha) =\frac{\log_p M(\alpha)-b}{\bar{a}}-1$ and we let $\lambda\equiv \lambda'\pmod{(p-1)p^D X({\bf T})}$. Since $$
{\rm dim}\,{\bf H}_\lambda^{\le\alpha}={\rm tr}\,({\bf 1}|{{\bf H}_\lambda^{\le\alpha}})
\equiv{\rm tr}\,({\bf 1}|{{\bf H}_{\lambda'}^{\le\alpha}})={\rm dim}\,{\bf H}_{\lambda'}^{\le\alpha} \pmod{p^{\bar{a}(D+1)+b}} $$ and since $p^{\bar{a}(D+1)+b}= M(\alpha)> {\rm dim}\, {\bf H}_\lambda^{\le\alpha}$, ${\rm dim}\, {\bf H}_{\lambda'}^{\le\alpha}$ we obtain ${\rm dim}\,{\bf H}_\lambda^{\le\alpha}={\rm dim}\,{\bf H}_{\lambda'}^{\le\alpha}$. Since $D(\alpha)$ is increasing in $\alpha$ the congruence $\lambda\equiv \lambda'\pmod{(p-1)p^D X({\bf T})}$ even implies ${\rm dim}\,{\bf H}_\lambda^{\le\beta}={\rm dim}\,{\bf H}_{\lambda'}^{\le\beta}$ for all $\beta\le \alpha$.
c.) Let $\lambda_i\in X({\bf T})$ and let $0\le\alpha_1<\cdots<\alpha_s\le\alpha$ be the non-trivial slopes appearing in ${\bf H}_{\lambda_i}^{\le\alpha}$. Part b.) implies that for any $\lambda\in \lambda_i+(p-1)p^{D(\alpha)} X({\bf T})$ the non-trivial slopes appearing in ${\bf H}_\lambda^{\le\alpha}$ again are $\alpha_1<\cdots<\alpha_s$ (note that $D(\alpha)$ is increasing in $\alpha$).
Since $X({\bf T})\cong {\Bbb Z}^n$ is covered by finitely many cosets $\lambda_i+(p-1)p^D X({\bf T})$, $i=1,\ldots,r$, we can select non-negative rational numbers $0\le \alpha_1<\cdots<\alpha_s\le\alpha$ such
that for any $\lambda\in{\cal R}$ and any $\beta\le\alpha$ the relation ${\bf H}_\lambda^\beta\not=0$ implies that $\beta$ is one of the $\alpha_i$.
d.) We denote by $\beta$ the largest of the numbers $\alpha_1<\cdots<\alpha_s$ which is strictly smaller than $\alpha$. We obtain ${\rm dim}\,{\bf H}_\lambda^{\alpha}={\rm dim}\,{\bf H}_\lambda^{\le\alpha}- {\rm dim}\,{\bf H}_\lambda^{\le\beta}$ for any $\lambda\in{\cal R}$. Part b.) then implies that ${\rm dim}\,{\bf H}_\lambda^{\alpha}={\rm dim}\,{\bf H}_{\lambda'}^{\alpha}$ if $\lambda\equiv\lambda'\pmod{(p-1)p^{D(\alpha)} X({\bf T})}$.
e.) Finally, since $s$ is decreasing in $\alpha$, equation (4) still holds if we replace "$\le\alpha$" by "$\le\beta$". Hence, by subtracting we obtain $$
{\rm tr}\,(T|{\bf H}_\lambda^{\alpha})\equiv {\rm tr}\,(T|{\bf H}_{\lambda'}^{\alpha})\pmod{p^s}. $$ Thus, (2.5) Corollary implies that for all $\Theta\in\hat{\cal H}_{\cal O}$ $$ m_{{\bf H}_\lambda^{\alpha}}(\Theta,c)=m_{{\bf H}_{\lambda'}^{\alpha}}(\Theta,c) $$ for some $c=c(\Theta)\ge {\sf a}(m+1)+{\sf b}$ with ${\sf a}$ and ${\sf b}$ as in the Proposition. (1.9) Proposition now implies the claim. This completes the proof of the Proposition.
{\it Remark. } Since ${\bf H}_\lambda^{\le\beta}\subseteq{\bf H}_\lambda^{\le\alpha}$ if $\beta\le\alpha$ it is natural that on ${\bf H}_\lambda^{\le\alpha}$ weaker congruences for eigenvalues of operators hold, i.e. $a$ and $b$ are decreasing in $\alpha$. In particular, smaller powers of ${\bf e}_{\lambda,\lambda'}$ should be sufficient, i.e. $a'/M(\alpha+1)$ also should be decreasing in $\alpha$.
\centerline{\bf \Large B. Cohomology of the Siegel upper half plane}
In this second part we consider an example: we will show that the family of cohomology groups of the Siegel upper half plane with coefficients in the irreducible representation of varying highest weight $\lambda$ satisfies equation $(\dag)$ in (3.7). As a consequence we obtain the existence of $p$-adic continuous families of Siegel eigenforms of finite slope $\alpha$. We start by fixing some notation.
\section{Notations}
{\bf (4.1) The symplectic group. } From now on we set ${\bf G}={\bf GSp}_{2n}$. Hence, for any ${\Bbb Z}$-algebra $K$ we have $$ {\bf G}(K)=\{g\in {\rm GL}_{2n}(K):\,g^t\Mat{}{I_n}{-I_n}{}g=\nu(g)\Mat{}{I_n}{-I_n}{}\;\mbox{for some $\nu(g)\in K^*$}\}. $$ The multiplier defines a character $\nu:\,{\bf G}\rightarrow{\Bbb G}_m$ and the derived group of ${\bf G}$ is the symplectic group ${\bf G}^0={\bf Sp}_{2n}$ which is the kernel of $\nu$, i.e. $$ {\bf G}^0(K)=\{g\in{\bf GSp}_{2n}(K):\,\nu(g)=1\}. $$ Thus, ${\bf G}^0(K)$ consists of all matrices $g=\Mat{A}{B}{C}{D}$ satisfying $$ AB^t=BA^t,\quad CD^t=DC^t,\quad AD^t-BC^t=1\leqno(1) $$
and is simply connected.
We set ${\bf Z}={\bf Z}_{\bf G}$, hence, ${\bf Z}(K)=\{\lambda\cdot I_{2n},\;\lambda\in K^*\}$ and we denote by ${\bf T}$ the maximal ${\Bbb Q}$-split torus in ${\bf G}$ whose $K$-points are given by $$ {\bf T}(K)=\{{\rm diag}(\alpha_1,\ldots,\alpha_n,\frac{\nu}{\alpha_1},\ldots,\frac{\nu}{\alpha_n}),\,\alpha_i\in K^*,\,\nu\in K^*\}.\leqno(2) $$
The intersection ${\bf T}^0={\bf T}\cap {\bf G}^0$ is a maximal split torus in ${\bf G}^0$ and $$ {\bf T}^0(K)=\{{\rm diag}(\alpha_1,\ldots,\alpha_n,\alpha_1^{-1},\ldots,\alpha_n^{-1}),\,\alpha_i\in K^*\}. $$ We denote by ${\mathfrak g}$ resp. ${\mathfrak g}^0$ resp. ${\mathfrak h}$ resp. ${\mathfrak h}^0$ the complexified Lie algebra of ${\bf G}/{\Bbb Q}$ resp. ${\bf G}^0/{\Bbb Q}$ resp. ${\bf T}$ resp. ${\bf T}^0$. Thus, $$ {\mathfrak g}^0=\{\Mat{A}{B}{C}{D}\in M_{2n}({\Bbb C}):\,A=-D^t, B=B^t, C=C^t\} $$
and $$ {\mathfrak h}^0=\{{\rm diag}(a_1,\ldots,a_n,-a_1,\ldots,-a_n),\,a_i\in{\Bbb C}\}. $$ We denote by $\Phi=\Phi({\bf T}^0,{\mathfrak g}^0)$ the root system of ${\bf G}^0$ with respect to ${\bf T}^0$; explicitly, the roots $\alpha\in\Phi$, a generator $X_\alpha$ of the corresponding root space ${\mathfrak g}_\alpha\le{\mathfrak g}^0$ and the $1$-parameter subgroup $\exp tX_\alpha$ are given as follows: $$ \begin{array}{lclcc} \hline\\ \qquad \alpha&X_\alpha\qquad &\exp tX_\alpha&\\ \hline\\ \epsilon_i-\epsilon_j,\;1\le j<i\le n&E_{i,j}-E_{j+n,i+n}&1+t(E_{i,j}-E_{j+n,i+n})&{\rm positive}&\\ \epsilon_i-\epsilon_j,\;1\le i<j\le n&E_{i,j}-E_{j+n,i+n}&1+t(E_{i,j}-E_{j+n,i+n})&&\Phi_1\\ \epsilon_i+\epsilon_j,\;1\le i<j\le n&E_{i,n+j}+E_{j,n+i}&1+t(E_{i,n+j}+E_{j,n+i})&{\rm positive}&\\ -\epsilon_i-\epsilon_j,\;1\le i<j\le n&E_{n+i,j}+E_{n+j,i}&1+t(E_{n+i,j}+E_{n+j,i})&&\Phi_2\\ 2\epsilon_i,\;1\le i\le n&E_{i,n+i}&1+tE_{i,n+i}&{\rm positive}&\\ -2\epsilon_i,\;1\le i\le n&E_{n+i,i}&1+tE_{n+i,i}&&\Phi_2\\ \hline\\ \end{array} $$
Here, $\epsilon_i:\,{\bf T}^0\rightarrow{\Bbb C}$ is defined by mapping ${\rm diag}(\alpha_1,\ldots,\alpha_n,\alpha_1^{-1},\ldots,\alpha_n^{-1})$ to $\alpha_i$ ($1\le i\le n$) and $\exp:\,{\mathfrak g}^0\rightarrow {\bf G}({\Bbb C})$ is the exponential.
We choose as a basis of the root system $$ \Delta=\{2\epsilon_1,\epsilon_{i+1}-\epsilon_{i},\,i=1,\ldots,n-1\}. $$
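For instance, for $n=2$ the positive roots in the above table are $\epsilon_2-\epsilon_1$, $\epsilon_1+\epsilon_2$, $2\epsilon_1$ and $2\epsilon_2$, and $\Delta=\{2\epsilon_1,\epsilon_2-\epsilon_1\}$; indeed, $\epsilon_1+\epsilon_2=(\epsilon_2-\epsilon_1)+2\epsilon_1$ and $2\epsilon_2=2(\epsilon_2-\epsilon_1)+2\epsilon_1$, so every positive root is a non-negative integral combination of elements of $\Delta$. (This small verification is only included to illustrate the chosen conventions.)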
We extend the roots $\alpha\in \Phi$ to ${\bf T}$ by setting them equal to $1$ on the center ${\bf Z}\le{\bf T}$.
{\bf (4.2) } We denote by $X({\bf T})$ the (additively written) group of (${\Bbb Q}$-)characters of ${\bf T}$. Since ${\bf T}$ is connected, $X({\bf T})$ is a finitely generated, free abelian group. Let $\gamma_1,\ldots,\gamma_{n+1}$ be a ${\Bbb Z}$-basis of $X({\bf T})$. For any $\lambda=\sum_{i=1}^{n+1} \lambda_i\gamma_i\in X({\bf T})$ ($\lambda_i\in{\Bbb Z}$) we set $$ v_p(\lambda)={\rm min}_i\,v_p(\lambda_i)={\rm max}\,\{m\in{\Bbb N}_0: \lambda\in p^m X({\bf T})\}. $$ Hence, $\lambda\equiv \lambda'\pmod{p^m X({\bf T})}$ is equivalent to $v_p(\lambda-\lambda')\ge m$ (compare section (1.8)).
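For instance (a trivial illustration of the definition), if $n+1=2$ and $\lambda=p^2\gamma_1+p\gamma_2$ then $v_p(\lambda)={\rm min}(2,1)=1$; equivalently, $\lambda\in pX({\bf T})$ but $\lambda\notin p^2X({\bf T})$.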
{\bf (4.3) Irreducible Representations. } Let $\lambda\in X({\bf T})$ be a dominant character, i.e. the restriction $\lambda^\circ=\lambda|_{{\bf T}^0}$ is a dominant
character of ${\bf T}^0$ and $\lambda|_{\bf Z}$ is an (algebraic) character. We denote by $(\pi_\lambda,L_\lambda)$ the irreducible representation of ${\bf G}(\bar{\Bbb Q}_p)$ of highest weight $\lambda$ on a $\bar{\Bbb Q}_p$-vector space $L_\lambda$. The representation $L_\lambda$ is defined over ${\Bbb Z}$, i.e. there is a ${\Bbb Z}$-submodule $L_\lambda({\Bbb Z})$ in $L_\lambda$ such that $L_\lambda=L_\lambda({\Bbb Z})\otimes\bar{\Bbb Q}_p$ and which is stable under ${\bf G}({\Bbb Z})$. Thus, for any ${\Bbb Z}$-algebra $R$ we obtain a ${\bf G}(R)$-module $L_\lambda(R)=L_\lambda({\Bbb Z})\otimes R$.
\section{The Hecke algebra ${\cal H}$ (attached to ${\bf GSp}_{2n}$)}
We define the Hecke algebra ${\cal H}$ which we shall be using. Essentially, ${\cal H}$ omits all Hecke operators at primes dividing the level, and its local component at the prime $p$ is generated by a single Hecke operator $T_p$.
{\bf (5.1) The local level subgroup ${\cal I}$. } Let $g=(g_{ij})_{ij}\in{\bf G}(K)$ be arbitrary. We partition $g$ as $$ g=\Mat{\sf A}{\sf B}{\sf C}{\sf D}=\left(
\begin{array}{ccc|ccc} \delta_{1}&&A_-&b_1&&B\\ &\ddots&&&\ddots&\\ A_+&&\delta_{n}&B'&&b_n\\ \hline\\ c_1&&C&\delta_{n+1}&&A_+'\\ &\ddots&&&\ddots&\\ C'&&c_n&A_-'&&\delta_{2n}\\ \end{array} \right)\leqno(1) $$ where $A_+=(a_{ij}^+)$, $A_-=(a_{ij}^-)$, $B=(b_{ij})$, $B'=(b'_{ij})$ and so on (a prime indicates that the submatrix and its corresponding primed submatrix are connected via one of the relations in equation (1) in section 4; the positive root spaces "correspond to entries in $A_+,B,B'$ and $b_1,\ldots,b_n$"). We denote by ${\bf B}\le {\bf G}$ the minimal parabolic subgroup attached to the basis $-\Delta$ of $\Phi$ and we define $$ {\cal I}\le {\bf G}({\Bbb Z}_p) $$ as the subgroup consisting of all matrices $g$ such that $g\pmod{p{\Bbb Z}_p}$ is contained in ${\bf B}({\Bbb Z}_p/(p))$. Thus, $g\in{\bf G}({\Bbb Z}_p)$ as in equation (1) is contained in ${\cal I}$ precisely if $$ a^+_{ij}\in p{\Bbb Z}_p,\,b_{ij}\in p{\Bbb Z}_p,\,b_1,\ldots,b_n\in p{\Bbb Z}_p.\leqno(2) $$
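{\it Example. } As an illustration (not needed later): for $n=1$ one has ${\bf GSp}_2={\bf GL}_2$ (with $\nu={\rm det}$), the blocks $A_+$, $A_-$, $B$, $B'$, $C$, $C'$ are empty, and condition (2) reduces to $b_1\in p{\Bbb Z}_p$; thus ${\cal I}$ consists of the matrices in ${\bf GL}_2({\Bbb Z}_p)$ whose upper right entry is divisible by $p$, i.e. which are lower triangular modulo $p$.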
{\bf (5.2) An auxiliary Lemma. } For any prime $\ell$ we set $$ {\bf T}({\Bbb Q}_\ell)^+=\{h\in {\bf T}({\Bbb Q}_\ell):\,v_\ell(\alpha(h))\ge 0\;\mbox{for all}\;\alpha\in\Delta\}. $$
{\bf Lemma. } {\it 1.) Let $s\in {\bf T}({\Bbb Q}_p)^+$ and set $$ {\cal V}_s=\{1+t_\alpha X_\alpha,\,\alpha\in\Phi^-,\,t_\alpha\in{\Bbb Z}_p/p^{-v_p(\alpha(s))}{\Bbb Z}_p\}\quad (\subseteq{\bf B}({\Bbb Z}_p)). $$
Then, ${\cal I}s{\cal I}=\bigcup_{v\in{\cal V}_s}{\cal I}sv$.
2.) Let $s_1,s_2\in {\bf T}({\Bbb Q}_p)^+$. Then $$ {\cal V}_{s_1s_2}={\rm Ad}(s_2^{-1})({\cal V}_{s_1}){\cal V}_{s_2}. $$ }
{\it Proof. } 1.) Since ${\cal I}s{\cal I}=\bigcup_{v} {\cal I}sv$ with $v$ running over $s^{-1}{\cal I}s\cap{\cal I}\backslash {\cal I}$, we have to show that ${\cal V}_s$ is a system of representatives for $s^{-1}{\cal I}s\cap{\cal I}\backslash {\cal I}$. To this end, we set $\alpha_{ij}=\epsilon_i-\epsilon_j$, $1\le i<j\le n$, and $\beta_{ij}=-\epsilon_i-\epsilon_j$, $1\le i\le j\le n$, and $\Phi_1=\{\alpha_{ij},\,1\le i<j\le n\}$ and $\Phi_2=\{\beta_{ij},\,1\le i\le j\le n\}$; hence, $\Phi^-=\Phi_1\cup \Phi_2$. Since $s^{-1} xs={\rm Ad}(s^{-1})(x)$ and since ${\rm Ad}(s)$ acts on ${\mathfrak g}_\alpha$ via $\alpha$ we obtain that $s^{-1}{\cal I}s\cap{\cal I}$ consists of all matrices $g=(g_{ij})\in{\cal I}$ satisfying $$ g_{ij},g_{n+j,n+i}\in p^{-v_p(\alpha_{ij}(s))}{\Bbb Z}_p,\, 1\le i<j\le n\quad\mbox{and}\quad g_{i+n,j},g_{j+n,i}\in p^{-v_p(\beta_{ij}(s))}{\Bbb Z}_p,\,1\le i\le j\le n.\leqno(3) $$
Now let $g=(g_{ij})\in{\cal I}$ be arbitrary. We will perform two sets of elementary row operations to transform $g$ into the unit matrix.
a.) Multiplying $g$ from the left by matrices of the form $1+t(E_{ij}-E_{j+n,i+n})=(1+tE_{ij})(1-tE_{n+j,n+i})$ with $1\le i<j\le n$ and $t\in {\Bbb Z}_p/p^{-v_p(\alpha_{ij}(s))}{\Bbb Z}_p$, i.e. by performing (two) elementary row operations we can achieve that all entries $g_{ij}$ with $1\le i<j \le n$ are contained in $p^{-v_p(\alpha_{ij}(s))}{\Bbb Z}_p$. (First eliminate the entries in the rightmost column in $A_-$, then eliminate the entries in column $n-1$ and so on; note that the $\delta_i=g_{ii}$ are units in ${\Bbb Z}_p$.) Analogously, multiplying from the left by matrices $1+t(E_{n+i,j}+E_{n+j,i})=(1+tE_{n+i,j})(1+tE_{n+j,i})$ with $i\le j$ and $t\in{\Bbb Z}_p/p^{-v_p(\beta_{ij}(s))}{\Bbb Z}_p$ we can achieve that $g_{n+i,j}\in p^{-v_p(\beta_{ij}(s))}{\Bbb Z}_p$ for all $1\le i\le j\le n$. We note that the matrices by which we multiplied are contained in ${\cal V}_s$ because they are contained in $1$-parameter subgroups corresponding to negative roots (namely $\alpha_{ij},\beta_{ij}$; cf. the table in section (4.1)).
b.) Now, multiplying further from the left
- by matrices $1+t(E_{ij}-E_{j+n,i+n})$ with $i<j$ and $t\in p^{-v_p(\alpha_{ij}(s))}{\Bbb Z}_p$ we can achieve $A_-=0$
- by matrices $1+t(E_{i+n,j}+E_{j+n,i})$ with $i\le j$, $t\in p^{-v_p(\beta_{ij}(s))}{\Bbb Z}_p$, we can achieve $C=0$ and $c_1=0,\ldots,c_n=0$
- by matrices $1+t(E_{ij}-E_{j+n,i+n})$ with $i>j$, $t\in p{\Bbb Z}_p$, we can achieve $A_+=0$
- by matrices $1+t(E_{i,j+n}+E_{j,i+n})$ with $j\ge i$, $t\in p{\Bbb Z}_p$, we can achieve $B=0$ and $b_1=0,\ldots,b_n=0$
- by matrices ${\rm diag}(\delta_1^{-1},\ldots,\delta_n^{-1},\delta_1,\ldots,\delta_n)$, $\delta_i\in{\Bbb Z}_p^*$, we can achieve $\delta_1=\cdots=\delta_n=1$.
We note that the matrices by which we multiplied are contained in ${\cal I}\cap s^{-1}{\cal I}s$ by equations (2) and (3). Since ${\sf A}=1$, the first relation in equation (1) in section 4 implies ${\sf B}={\sf B}^t$, hence, ${\sf B}=0$. The last relation in equation (1) in section 4 then implies ${\sf D}=1$ and the second relation in (1) in section 4 finally implies ${\sf C}={\sf C}^t$, hence, ${\sf C}=0$. Thus, we have found that $$ \prod_j k_j \prod_i v_i \, g=1 $$ with certain $k_j\in{\cal I}\cap s^{-1}{\cal I}s$ and $v_i\in{\cal V}_s$. Thus, $(\prod v_i) g$ is contained in $s^{-1}{\cal I}s\cap{\cal I}\backslash {\cal I}$ which shows that ${\cal V}_s$ is a system of representatives for $s^{-1}{\cal I}s\cap{\cal I}\backslash {\cal I}$.
Taking into account that the entries of a matrix $g=(g_{ij})\in s^{-1}{\cal I}s\cap{\cal I}$ satisfy equation (3) it is not difficult to verify that the elements in ${\cal V}_s$ are different modulo $s^{-1}{\cal I}s\cap{\cal I}$. Thus, the proof of the first part is complete.
2.) Since ${\rm Ad}(s_2)X_\alpha=\alpha(s_2)X_\alpha$ we find that $$ {\rm Ad}(s_2^{-1})({\cal V}_{s_1})=\{ 1+t_\alpha X_\alpha,\,\alpha\in \Phi^-,\,t_\alpha\in p^{v_p(\alpha(s_2^{-1}))}{\Bbb Z}_p/p^{-v_p(\alpha(s_1))+v_p(\alpha(s_2^{-1}))}{\Bbb Z}_p\}. $$ Since $v_p(\alpha(s_2^{-1}))=-v_p(\alpha(s_2))$ and $-v_p(\alpha(s_1))+v_p(\alpha(s_2^{-1}))=-v_p(\alpha(s_1s_2))$ an easy calculation yields the claim. This finishes the proof of the Lemma.
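{\it Example. } As an illustration of part 1.) (not needed later): for $n=1$ and $s={\rm diag}(p,p^{-1})\in{\bf T}({\Bbb Q}_p)^+$ the only negative root is $-2\epsilon_1$, with $X_{-2\epsilon_1}=E_{2,1}$ and $-2\epsilon_1(s)=p^{-2}$; hence ${\cal V}_s=\{1+tE_{2,1},\,t\in{\Bbb Z}_p/p^{2}{\Bbb Z}_p\}$ and ${\cal I}s{\cal I}$ is the union of the $p^2$ right cosets ${\cal I}s(1+tE_{2,1})$.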
{\bf (5.3) The local Hecke algebra at $p$. } We set $$ D_p={\cal I}{\bf T}({\Bbb Q}_p)^+{\cal I}\,\le{\bf G}({\Bbb Q}_p). $$ In the Proposition below we will see that $D_p$ is a semigroup, hence, we can define the local Hecke algebra $$ {\cal H}({\cal I}\backslash D_p/ {\cal I}) $$ attached to the pair $(D_p,{\cal I})$. For any $s\in {\bf T}({\Bbb Q}_p)^+$ we define the element $T_s={\cal I}s{\cal I}$, i.e. ${\cal H}({\cal I}\backslash D_p/ {\cal I})$ is the ${\Bbb Z}$-linear span of the elements $T_s$, $s\in {\bf T}({\Bbb Q}_p)^+$.
{\bf Proposition. }{\it 1.) $D_p\le {\bf G}({\Bbb Q}_p)$ is a semigroup.
2.) For all $s_1,s_2\in{\bf T}({\Bbb Q}_p)^+$ we have $T_{s_1}T_{s_2}=T_{s_1s_2}$. In particular, the Hecke algebra ${\cal H}({\cal I}\backslash D_p/ {\cal I})$ is commutative.
}
{\it Proof. } 1.) We compute $$ {\cal I}s_1{\cal I}s_2{\cal I}=\bigcup_{v\in{\cal V}_{s_1}} {\cal I}s_1 v s_2{\cal I}=\bigcup_{v\in{\cal V}_{s_1}} {\cal I}s_1 s_2 {\rm Ad}(s_2^{-1})(v) {\cal I} $$ Since $s_2\in {\bf T}({\Bbb Q}_p)^+$ we know that $v_p(\beta(s_2^{-1}))\le 0$ for all positive roots $\beta$. Hence, we obtain for any $v=1+t_\alpha X_\alpha\in{\cal V}_{s_1}$: $$ {\rm Ad}(s_2^{-1})(v)=1+t_\alpha \alpha(s_2^{-1}) X_\alpha\in {\bf B}({\Bbb Z}_p)\subset{\cal I} $$ (note that by definition of ${\cal V}_{s_1}$ the root $\alpha$ is negative). Thus, ${\rm Ad}(s_2^{-1})(v)\in{\cal I}$ and we obtain ${\cal I}s_1{\cal I}s_2{\cal I}={\cal I}s_1s_2{\cal I}$. Thus $D_p$ is closed under multiplication.
2.) To prove the second claim we compute using part 2.) of the previous Lemma $$ T_{s_1}T_{s_2}=\bigcup_{v\in {\cal V}_{s_1},w\in{\cal V}_{s_2}}{\cal I}s_1vs_2w= \bigcup_{v\in {\cal V}_{s_1},w\in{\cal V}_{s_2}} {\cal I}s_1s_2{\rm Ad}(s_2^{-1})(v)w=\bigcup_{z\in {\cal V}_{s_1s_2}}{\cal I}s_1s_2 z=T_{s_1s_2}. $$ Thus, the proof of the Proposition is complete.
{\bf (5.4) The adelic Hecke Algebra. } We fix an integer $N$ which is not divisible by $p$. We select a compact open subgroup $U=\prod_{\ell\not=\infty} U_\ell$ of ${\bf G}(\hat{\Bbb Z})$ and a sub semigroup $D=\prod_{\ell\not=\infty} D_\ell$ of ${\bf G}({\Bbb A}_f)$ as follows. For all primes $\ell$ not dividing $pN$ we set $U_\ell={\bf G}({\Bbb Z}_\ell)$ and $D_\ell={\bf G}({\Bbb Q}_\ell)$; at the prime $p$ we define $U_p={\cal I}$ and $D_p={\cal I}{\bf T}({\Bbb Q}_p)^+{\cal I}$ as in the previous section;
for primes $\ell|N$ we only assume that $U_\ell\le{\bf G}({\Bbb Z}_\ell)$ is compact open and ${\rm det}(U_\ell)={\Bbb Z}_\ell^*$ and we set $D_\ell=U_\ell$. We denote by $$ {\cal H}(U_\ell\backslash D_\ell/U_\ell)\quad\mbox{resp.}\quad {\cal H}(U\backslash D/U) $$ the local resp. global (adelic) Hecke algebra attached to the pair $(U_\ell,D_\ell)$ resp. $(U,D)$. We exhibit a set of generators for ${\cal H}(U\backslash D/U)$ as follows. We define $$ \Sigma_\ell^+=\{{\rm diag}(\ell^{e_1},\ldots,\ell^{e_n},\ell^{-e_1},\ldots,\ell^{-e_n}),\,e_i\in{\Bbb Z},\,0\le e_1\le e_2\le\cdots\le e_n\} $$
if $\ell\not|N$ and $\Sigma^+_\ell=\{1\}$ if $\ell|N$. Thus, $v_\ell(\alpha(h))\ge 0$ for all $h\in\Sigma_\ell^+$ and all $\alpha\in\Delta$. Any element $T=U_\ell s U_\ell$ in ${\cal H}(U_\ell\backslash D_\ell/U_\ell)$ has a representative $s\in\Sigma_\ell^+$ (for primes $\ell$ not dividing $Np$ this is well known and for primes $\ell$ dividing $Np$ this is immediate from the definition of $U_\ell$ and $D_\ell$.) In particular, the ${\Bbb Z}$-algebra ${\cal H}(U\backslash D/U)$ is generated by the following elements $$ {\cal H}(U\backslash D/U)=\langle UsU,\,s\in\bigcup_\ell \Sigma_\ell^+ \rangle. \leqno(4) $$
Moreover, ${\cal H}(U\backslash D/U)$ is commutative, because this holds locally at all primes: for primes $\ell\not|Np$ again this is well known,
for primes $\ell|N$ this is trivial and for $p$ this follows from (5.3) Proposition.
{\bf (5.5) The global (non-adelic) Hecke algebra. } We set $$ \Gamma=U\cap {\bf G}({\Bbb Q})\quad\mbox{and}\quad \Delta=D\cap {\bf G}({\Bbb Q}). $$ Thus, $\Gamma$ satisfies the following local condition at the prime $p$: $$ \Gamma\le{\cal I}\,(\le {\bf G}({\Bbb Q}_p)).\leqno(5) $$ We note that $U\subset D$, hence, $\Gamma\subset \Delta$. In the next Lemma we compare the Hecke algebras attached to the pairs $(D,U)$ and $(\Delta,\Gamma)$ $$ \begin{array}{ccc} \Gamma&\subset&\Delta\\ \cap&&\cap\\ U&\subset&D.\\ \end{array} $$
{\bf (5.5.1) Lemma. }{\it The canonical map $\Gamma\alpha\Gamma\mapsto U\alpha U$ induces an isomorphism of rings $$ {\cal H}(\Gamma\backslash \Delta/\Gamma)\rightarrow {\cal H}(U\backslash D/U). $$ }
{\it Proof. } According to [M], Theorem 2.7.6, p. 72, we have to show that
i) $D=\Delta U$
ii) $U\alpha U=U\alpha\Gamma$ for all $\alpha\in \Delta$
iii) $U\alpha\cap \Delta=\Gamma\alpha$ for all $\alpha\in \Delta$.
i) is an immediate consequence of strong approximation which holds since ${\rm det}(U)=\hat{\Bbb Z}^*$.
We prove ii). The inclusion "$\supseteq$" is obvious. To prove the reverse inclusion we note that $U\alpha U=\bigcup_v U\alpha v$, where $v$ runs over a system of representatives of $\alpha^{-1}U\alpha\cap U\backslash U$. Thus, we have to show that $\Gamma$ contains a system of representatives of $\alpha^{-1}U\alpha\cap U\backslash U$. Let $u\in U$ be arbitrary. Since $\alpha^{-1}U\alpha\cap U\le {\bf G}({\Bbb A}_f)$ is a compact open subgroup, strong approximation yields $u=\gamma v$ with $\gamma\in{\bf G}({\Bbb Q})$ and $v\in \alpha^{-1}U\alpha\cap U$. Hence, $\gamma=uv^{-1}$ is contained in $U\cap {\bf G}({\Bbb Q})=\Gamma$. Thus, $\gamma$ is a representative of the coset of $u$ in $\alpha^{-1}U\alpha\cap U\backslash U$. It remains to verify that strong approximation holds with respect to $\alpha^{-1}U\alpha\cap U$, i.e. ${\rm det}\,(\alpha^{-1}U\alpha\cap U)=\hat{\Bbb Z}^*$. It is sufficient to prove this locally for all primes $\ell$.
If $\ell|N$ we know by definition of $U_\ell,D_\ell$ that $\alpha\in D_\ell=U_\ell$, hence, ${\rm det}(U_\ell\cap \alpha^{-1}U_\ell\alpha)={\Bbb Z}_\ell^*$
because the determinant is surjective on $U_\ell$. If $\ell\not|N$, then the Cartan decomposition in case $\ell\not|Np$ and the definition of $D_p$ show that $\alpha\in D_\ell$ can be written $\alpha=u_1tu_2$ where $u_1,u_2\in U_\ell$ and $t\in D_\ell$ is a diagonal matrix. We obtain ${\rm det}(U_\ell\cap \alpha^{-1}U_\ell\alpha)={\rm det}(U_\ell\cap t^{-1}U_\ell t)$. Let $\lambda\in{\Bbb Z}_\ell^*$ arbitrary; since $s={\rm diag}(\lambda,1,\ldots,1)\in U_\ell$ commutes with $t$ we see that it is contained in $U_\ell\cap t^{-1}U_\ell t$. Since ${\rm det}(s)=\lambda$ we obtain ${\rm det}(U_\ell\cap \alpha^{-1} U_\ell\alpha)={\Bbb Z}_\ell^*$. Thus, strong approximation holds and ii) is proven. Finally, iii) is immediate since $\Gamma$ is contained in $U$ and in $\Delta$ and since $U\cap {\bf G}({\Bbb Q})=\Gamma$.
Thus, the proof of the lemma is complete.
{\bf (5.5.2)} Since the (adelic) Hecke algebra attached to $(D,U)$ is commutative (cf. section (5.4)), we obtain from the above Lemma that ${\cal H}(\Gamma\backslash \Delta/\Gamma)$ is a commutative algebra. For $s\in \Delta$ we set $$ T_s=\Gamma s\Gamma; $$ equation (4) in (5.4) and (5.5.1) Lemma imply that ${\cal H}(\Gamma\backslash \Delta/\Gamma)$ is generated by the following elements $$ {\cal H}(\Gamma\backslash \Delta/\Gamma)=\langle \Gamma s\Gamma,\;s\in\bigcup_\ell \Sigma_\ell^+ \rangle $$
(note that $\Sigma_\ell^+\subseteq \Delta$ for all primes $\ell$). We define the element $$ h_p={\rm diag}(p^1,p^2,\ldots,p^n,p^0,p^{-1},\ldots,p^{-n+1})\in\Sigma_p^+, $$ hence, $\alpha(h_p)=p$ for all $\alpha\in \Delta$
and we denote the corresponding Hecke operator by $$ T_p=T_{h_p}=\Gamma h_p\Gamma. $$ Since $T_p$ maps to ${\cal I}h_p{\cal I}\in {\cal H}({\cal I}\backslash D_p/{\cal I})\le {\cal H}(U\backslash D/U)$ under the isomorphism in (5.5.1) Lemma, (5.3) Proposition part 2.) implies that $$ T_p^e=T_{h_p}^e=T_{h_p^e}.\leqno(6) $$
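{\it Example. } As a check of the normalization in the case $n=2$ (this computation is only an illustration): $h_p={\rm diag}(p,p^2,1,p^{-1})$, i.e. $\alpha_1=p$, $\alpha_2=p^2$ and $\nu=p$ in the parametrization of ${\bf T}$ in (4.1); reading the extension of the roots which is trivial on the center as $(\epsilon_2-\epsilon_1)(h_p)=\alpha_2/\alpha_1$ and $(2\epsilon_1)(h_p)=\alpha_1^2/\nu$, one finds $(\epsilon_2-\epsilon_1)(h_p)=p$ and $(2\epsilon_1)(h_p)=p$, as claimed.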
{\bf (5.6) The Hecke algebra ${\cal H}$. } We define the ${\Bbb Z}$-algebra ${\cal H}_{\Bbb Z}$ as the ${\Bbb Z}$-subalgebra of ${\cal H}(\Gamma\backslash \Delta/\Gamma)$
which is generated by the Hecke operators $T_s$ with $s\in \bigcup_{\ell\not|Np} \Sigma_\ell^+\cup\{h_p\}$; hence, $$
{\cal H}_{\Bbb Z}\cong\bigotimes_{\ell\not|Np} {\cal H}(U_\ell\backslash D_\ell/U_\ell)\otimes{\Bbb Z}[T_p]. $$
We set $$
\Sigma^+:=\prod_{\ell\not|Np}\Sigma_\ell^+\cdot\{h_p^m,\,m\in{\Bbb N}_0\}. $$ As a ${\Bbb Z}$-module, ${\cal H}_{\Bbb Z}$ is then generated by the operators $T_s$ with $s\in\Sigma^+$. Finally, we put ${\cal H}={\cal H}_{\Bbb Z}\otimes_{\Bbb Z} \bar{\Bbb Q}_p$ and ${\cal H}_{\cal O}={\cal H}_{\Bbb Z}\otimes{\cal O}$.
\section{The cohomology groups ${\bf H}_\lambda$ (attached to the Siegel upper half plane)}
We recall the normalization of the Hecke operators which leads to an action of the Hecke algebra on the $p$-adically integral cohomology. We also state the simple topological trace formula of Bewersdorff.
{\bf (6.1) Normalization of Hecke algebra representations. } We keep the notations from section 5. In particular, $\Gamma=U\cap{\bf G}({\Bbb Q})\le {\bf G}({\Bbb Z})$ with $U\le{\bf G}(\hat{\Bbb Z})$ defined as in (5.4) and
${\cal H}_{\Bbb Z}\cong\bigotimes_{\ell\not|Np} {\cal H}(U_\ell\backslash D_\ell/U_\ell)\otimes{\Bbb Z}[T_p]$ is the Hecke algebra defined in (5.6). To ensure that the Hecke algebra later will act on $p$-adically integral cohomology, we have to normalize the action of the Hecke algebra. This depends on a choice of a dominant character $\lambda\in X({\bf T})$. We first define a ${\Bbb Z}$- algebra morphism $$ \begin{array}{cccc}
\varphi_\lambda=\otimes_{\ell\not|N}\varphi_{\lambda,\ell}:&{\cal H}_{\Bbb Z}&\rightarrow&{\cal H}_{\Bbb Z}\\ &T&\mapsto&\{T\}_\lambda\\ \end{array} $$
as follows. For all primes $\ell\not|Np$ we denote by $\varphi_{\lambda,\ell}:\,{\cal H}(U_\ell\backslash D_\ell/U_\ell)\rightarrow {\cal H}(U_\ell\backslash D_\ell/U_\ell)$ the identity map. At the prime $p$ we note that ${\cal H}(U_p\backslash D_p/U_p)={\Bbb Z}[T_p]$ is a polynomial algebra generated by $T_p$. We define $\varphi_{\lambda,p}:\,{\cal H}(U_p\backslash D_p/U_p)\rightarrow {\cal H}(U_p\backslash D_p/U_p)$ by sending $T_p=T_{h_p}$ to $\{T_p\}_\lambda:=\lambda(h_p)T_p$. Equation (6) in (5.5) implies that $\varphi_\lambda(T_{h_p^e})=\varphi_\lambda(T_p^e)=\lambda(h_p^e)T_p^e$. Moreover, since $\lambda$ is dominant and $h_p\in\Sigma_p^+$ we obtain $\lambda(h_p)\in{\Bbb Z}$, hence, $\varphi_\lambda(T_p)=\lambda(h_p) T_p\in{\cal H}_{\Bbb Z}$ and $\varphi_\lambda$ is defined over ${\Bbb Z}$.
Tensoring with $\bar{\Bbb Q}_p$ we obtain a $\bar{\Bbb Q}_p$-algebra morphism $$ \varphi_\lambda:\,{\cal H}\rightarrow{\cal H} $$ which is defined over ${\cal O}$.
Let ${\bf H}$ be a ${\cal H}$-module. We define the $\lambda$-normalization "$\cdot_\lambda$" of the action of ${\cal H}$ on ${\bf H}$ by composing the ${\cal H}$-module structure on ${\bf H}$ with the ${\bar{\Bbb Q}_p}$-algebra morphism $\varphi_\lambda$, i.e. $$ T\cdot_\lambda v=\{T\}_\lambda v\qquad(T\in{\cal H},\; v\in {\bf H}). $$ Thus, $$
T_h\cdot_\lambda v=T_h v\quad\mbox{if $h\in\Sigma_\ell^+$, $\ell\not|Np$}\quad\mbox{and}\quad T_p^e\cdot_\lambda v=\lambda(h_p^e)T_p^e v. $$
{\bf (6.2) The Cohomology groups. } We select a maximal compact subgroup $K_\infty$ of the connected component of the identity of ${\bf G}({\Bbb R})$ and we denote by $X={\bf G}({\Bbb R})/K_\infty {\bf Z}({\Bbb R})$ the symmetric space. We denote by ${\cal L}_\lambda$ the sheaf on $\Gamma\backslash \bar{X}$ attached to the irreducible ${\bf G}(\bar{\Bbb Q}_p)$-module $L_\lambda$ of highest weight $\lambda\in X({\bf T})$ (cf. (4.3)). The cohomology groups $$ {\bf H}_\lambda=H^d(\Gamma\backslash X,{\cal L}_\lambda) $$ then are modules under the Hecke algebra ${\cal H}$. From now on, by $d=d_n$ we will always understand the {\it middle degree} of the locally symmetric space $\Gamma\backslash X$. We denote by $$ {\bf H}_{\lambda,{\cal O}}=H^d(\Gamma\backslash X,{\cal L}_\lambda)_{\rm int} $$ the image of $H^d(\Gamma\backslash X,{\cal L}_\lambda({\cal O}))$ in $H^d(\Gamma\backslash X,{\cal L}_\lambda)$. $H^d(\Gamma\backslash X,{\cal L}_\lambda)_{\rm int}$ is a ${\cal O}$-lattice in $H^d(\Gamma\backslash X,{\cal L}_\lambda)$. On the other hand, the normalized Hecke operators $\lambda(h)T_h$, $h\in\Sigma^+$, act on cohomology with integral coefficients $H^d(\Gamma\backslash X,{\cal L}_\lambda({\Bbb Z}))$
(cf. e.g. the proof of (7.2) Lemma below; cf. also [Ma 1], (5.4) Lemma for more details). If $h\in\Sigma_\ell^+$ with $\ell\not|Np$ then $\lambda(h)\in{\cal O}^*$
is a $p$-adic unit, hence, the non-normalized Hecke operator $T_h$ already acts on cohomology with $p$-adically integral coefficients $H^d(\Gamma\backslash X,{\cal L}_\lambda({\cal O}))$. Thus, we only have to normalize the Hecke operator at the prime $p$, i.e. we obtain that w.r.t. the $\lambda$-normalized action the Hecke algebra ${\cal H}_{\cal O}$ acts on cohomology with $p$-adically integral coefficients and, hence, acts on $H^d(\Gamma\backslash X,{\cal L}_\lambda)_{\rm int}$. Thus, ${\bf H}_\lambda$ is a finite dimensional ${\cal H}$-module which is defined over ${\cal O}$ with respect to the lattice ${\bf H}_{\lambda,{\cal O}}$.
{\bf (6.3) Slope subspace of cohomology. } We define the slope $\le\beta$ subspace $$ {\bf H}_\lambda^{\le\beta}=H^d(\Gamma\backslash X,{\cal L}_\lambda)^{\le\beta} $$ with respect to the action of the {\it normalized} Hecke operator $\{T_p\}_\lambda=\lambda(h_p)T_p$. In [Ma 1], 6.5 Theorem it is proven using only elementary means from representation theory that there are natural numbers $M(\beta)$, $\beta\in{\Bbb Q}_{\ge 0}$, such that $$ {\rm dim}\,{\bf H}_\lambda^{\le\beta}\le \frac{1}{2}M(\beta)\quad\mbox{for all dominant $\lambda\in X({\bf T})$ and all $\beta\in{\Bbb Q}_{\ge 0}$}.\leqno(1) $$ Hence, the assumptions in (3.6) are satisfied.
{\bf (6.4) A simple topological Trace Formula. } We denote by $\sim_\Gamma$ the equivalence relation on ${\bf G}({\Bbb Q})$ defined by conjugation, i.e. $x\sim_\Gamma y$, $x,y\in{\bf G}({\Bbb Q})$, precisely if $x,y$ are conjugate by an element $\gamma\in\Gamma$ and we denote by $[\Xi]_\Gamma=[\Xi]$ the conjugacy class of $\Xi\in{\bf G}({\Bbb Q})$.
{\bf Theorem }(cf. [B]). {\it Let $h\in\Sigma^+$. There are integers $c_{[\Xi]}\in{\Bbb Z}$, $[\Xi]\in\Gamma h\Gamma/\sim_\Gamma$, such that the following holds. For all irreducible representations $L_\lambda$ we have $$
{\rm tr}\,(T_h|H^d(\Gamma\backslash X,{\cal L}_\lambda))=\sum_{[\Xi]\in\Gamma h \Gamma/\sim_\Gamma} c_{[\Xi]} \,{\rm tr}\,(\Xi^{-1}|L_\lambda).\leqno(2) $$
}
{\it Proof. } This is a direct consequence of 2.6 Satz in [B], taking into account that for regular weight $\lambda$ the cohomology of $\Gamma\backslash X$ vanishes in all degrees except for the middle degree $d$.
{\bf (6.5) Remark. } 1.) We would like to emphasize that the proof of the simple trace formula of Bewersdorff is elementary. The only deeper ingredient is the existence of a good compactification of $\Gamma\backslash X$, which is the {\it Borel-Serre compactification}. Apart from that the proof only uses very general and basic principles of algebraic topology. (In [B], the formula in equation (2) only serves as a starting point for further investigations).
2.) The terms appearing on the geometric side of Bewersdorff's trace formula are the archimedean components of orbital integrals on the symplectic group.
\section{Verification of Identity $(\dag)$ in section (3.7)}
We show that the family of cohomology groups $(H^d(\Gamma\backslash X,{\cal L}_\lambda))$ satisfies equation $(\dag)$ in (3.7). This is the main technical work. Use of Bewersdorff's trace formula reduces the verification of equation ($\dag$) to congruences between values of irreducible characters, and use of the {\it Weyl character formula} further reduces this to a problem about the conjugacy of certain symplectic matrices, with which we shall begin.
We keep the notations from section 6. In particular, $\Gamma={\bf G}({\Bbb Q})\cap U\le{\bf G}({\Bbb Z})$ is an arithmetic subgroup as in (5.4), hence, $\Gamma\le {\cal I}\,(\le {\bf G}({\Bbb Q}_p))$, and ${\cal H}_{\cal O}={\cal H}_{\Bbb Z}\otimes{\cal O}$ is the Hecke algebra defined in (5.6); in particular $T_p=T_{h_p}$, where $h_p$ is the diagonal matrix defined in (5.5.2).
{\bf (7.1) Lemma. }{\it Let $\gamma\in\Gamma$ and $h\in{\bf T}({\Bbb Z}_p)$. The matrix $h^{-1}h_p^{-e}\gamma^{-1}\in{\bf G}({\Bbb Q}_p)\le{\bf GL}_{2n}({\Bbb Q}_p)$, $e\in{\Bbb N}$, is ${\bf G}(\bar{\Bbb Q}_p)$-conjugate to a diagonal matrix $$ \xi={\rm diag}(\xi_1,\ldots,\xi_{2n})\in{\bf T}({\Bbb Q}_p) $$ satisfying $v_p(\alpha(\xi))=-e$ for all $\alpha\in \Delta$.
}
{\it Proof. } We proceed in several steps.
a.) We begin by writing $\gamma=(\gamma_{ij})$ and $h={\rm diag}(h_1,\ldots,h_{2n})$; note that the entries of $h$ as well as the diagonal entries $\gamma_{ii}$ of $\gamma$ are $p$-adic units. We set $$ \underline{h}_p={\rm diag}(p^{(n-1)e},p^{(n-2)e},\ldots,p^0,p^{ne},p^{(n+1)e},\ldots,p^{(2n-1)e}), $$ hence, $\underline{h}_p$ differs from $h_p^{-e}$ by a scalar multiple $p^{ne}$ and has integer entries. In particular, the matrices $h^{-1}h_p^{-e}\gamma^{-1}$ and $A=h^{-1}\underline{h}_p\gamma^{-1}$ differ by a scalar factor $p^{ne}$, hence, we may replace $h^{-1}h_p^{-e}\gamma^{-1}$ by $A$. We denote by $$ \chi(T)=T^{2n}+c_1T^{2n-1}+\cdots+c_{2n}\in{\Bbb Q}_p[T] $$ the characteristic polynomial of $A$. We write $A=(a_{ij})$. Since $\gamma^{-1}\in\Gamma\le{\cal I}$, we obtain $$ v_p(a_{ij})\ge \left\{ \begin{array}{cc} (n-i)e&i\le n\\ (i-1)e&i>n. \end{array} \right.\leqno(1) $$ More precisely, we find $$ v_p(a_{ii})= \left\{ \begin{array}{cc} (n-i)e&i\le n\\ (i-1)e&i>n \end{array} \right.\leqno(2) $$
and $$ v_p(a_{ij})\ge \left\{ \begin{array}{cc} (n-i)e+1&i\le n\\ (i-1)e+1&i>n \end{array} \right.\leqno(3) $$ if $a_{ij}$ is one of the entries "$\mu_{ij}$, $\rho_{ij}$ or $\mu'_{ij}$" below, i.e. if ($j<i\le n$) or ($i\le n$ and $j>n$) or ($i,j>n$ and $j>i$) $$ A=\left( \begin{array}{ccccccc}
\tau_1&&\beta_{ij}&|&&&\\
&\ddots&&|&&\rho_{ij}&\\
\mu_{ij}&&\tau_n&|&&&\\ -&-&-&-&-&-&-\\
&&&|&\tau_{n+1}&&\mu'_{ij}\\
&*&&|&&\ddots&\\
&&&|&\beta'_{ij}&&\tau_{2n}\\ \end{array} \right).\leqno(4) $$ We recall that $$ \chi(T)={\rm det}\,(A-T{\bf I})=\sum_{\pi\in S_{2n}} {\rm sgn}(\pi)\,\prod_i (a_{i,\pi(i)}-\delta_{i,\pi(i)}T). $$ A permutation $\pi\in S_{2n}$ contributes to $c_i$ (i.e. the summand corresponding to $\pi$ contributes in degree $T^{2n-i}$) only if $\pi$ has at least $2n-i$ fixed points. We denote by ${\rm Fix}(\pi)$ the set of fixed points of $\pi$ and obtain $$
c_i=(-1)^{2n-i}\sum_{\pi\in S_{2n}\atop|{\rm Fix}(\pi)|\ge 2n-i} \sum_{I\subseteq {\rm Fix}(\pi) \atop |I|=2n-i} {\rm sgn}(\pi)\,\prod_{i=1\atop i\not\in I}^{2n} a_{i\pi(i)}. $$
In particular, the coefficient $c_i$ of $\chi$ is a sum of terms of the form $$ \pm \prod_{i\in J} a_{i\sigma(i)},\leqno(5) $$ where $J\subseteq\{1,\ldots,2n\}$ is a subset of cardinality $i$ and $\sigma\in {\rm Sym}(J)$ is a permutation of $J$ ($J$ is the complement of $I$;
note that $\pi(I)=I$, hence, the complement $J$ of $I$ too is $\pi$-invariant and $\sigma=\pi|_J$ is the restriction of $\pi$ ).
b.) We select $i\in\{1,\ldots,2n\}$ and look closer at the coefficient $c_i$. We first assume $i\le n$ and we define the subset
$J_{\rm min}=\{n-i+1,n-i+2,\ldots,n\}$; note that $|J_{\rm min}|=i$ and let $\sigma\in {\rm Sym}(J_{\rm min})$. If $\sigma={\rm id}_{J_{\rm min}}$ then (5) yields a term which has $p$-adic value $e\,\sum_{k=0}^{i-1} k$ by equation (2). If $\sigma\not={\rm id}_{J_{\rm min}}$ then $\sigma$ picks at least one entry "$\mu_{ij}$" below the diagonal (cf. equation (4)) which is divisible by one more $p$ (cf. equation (3)) and, hence, the $p$-adic value of the term in equation (5) is larger than $e\,\sum_{k=0}^{i-1} k$. Thus, $$ \sum_{\sigma\in{\rm Sym}(J_{\rm min})} (\pm 1) \prod_{i\in J_{\rm min}} a_{i\sigma(i)} $$
has $p$-adic value $e\,\sum_{k=0}^{i-1} k$. If $J\not=J_{\rm min}$ with $|J|=i$, then equation (1) implies that $\prod_{i\in J} a_{i\sigma(i)}$ has $p$-adic value bigger than $e\,\sum_{k=0}^{i-1} k$ for any $\sigma\in{\rm Sym}(J)$. Thus, we obtain $$ v_p(c_i)=e\,\sum_{k=0}^{i-1} k,\quad i=1,\ldots,n. $$ Next we assume $i>n$ and we define the subset $J_{\rm min}=\{1,\ldots,n,\ldots,i\}$. We claim: if $\sigma\in{\rm Sym}(J_{\rm min})$ is not the identity then there is $i_0\in J_{\rm min}$ such that $$ v_p(a_{i_0,\sigma(i_0)})\ge
\left\{ \begin{array}{ccc} (n-i_0)e+1&\mbox{if}&i_0\le n\\ (i_0-1)e+1&\mbox{if}&i_0>n.\\ \end{array} \right.\leqno(6) $$ To prove the claim we assume that $\sigma\in{\rm Sym}(J_{\rm min})$ is a permutation such that equation (6) does not hold. Equation (3) then implies that $\sigma(i)\le n$ for all $i\le n$ ("$\rho_{ij}$" has $p$-adic value greater than or equal to $(n-i)e+1$).
Thus, $\sigma$ maps $\{1,\ldots,n\}$ to itself and also maps $\{n+1,\ldots,i\}$ to itself, i.e. $\sigma$ defines permutations $\sigma|_{\{1,\ldots,n\}}$ resp. $\sigma|_{\{n+1,\ldots,i\}}$ of $\{1,\ldots,n\}$ resp. of $\{n+1,\ldots,i\}$. Since equation (6) does not hold, equation (3) further implies that
$\sigma|_{\{1,\ldots,n\}}$ and $\sigma|_{\{n+1,\ldots,i\}}$ are the identity. Hence, $\sigma$ is the identity, which proves the claim. As above we then obtain for all $\sigma\in{\rm Sym}(J_{\rm min})$, $\sigma\not={\rm id}$, that $$ v_p(\prod_{i\in J_{\rm min}} a_{i,\sigma(i)}) >v_p(\prod_{i\in J_{\rm min}} a_{i,i})=e\,\sum_{k=0}^{i-1} k. $$ Thus, $$ v_p(\sum_{\sigma\in{\rm Sym}(J_{\rm min})}(\pm 1) \prod_{i\in J_{\rm min}} a_{i\sigma(i)})=e\,\sum_{k=0}^{i-1} k. $$
As in the case $i\le n$ equation (1) implies that for any $J\not=J_{\rm min}$, $|J|=i$, and any $\sigma\in{\rm Sym}(J)$ $$ v_p(\prod_{i\in J} a_{i\sigma(i)})>e\,\sum_{k=0}^{i-1} k. $$ Thus, $$ v_p(c_i)=e \sum_{k=0}^{i-1} k $$ for all $i=n+1,\ldots,2n$. Hence, this holds for all $i=1,\ldots,2n$.
c.) In particular, the Newton polygon of $\chi$ consists of $2n$ segments which have slopes $0,e,2e,\ldots,(2n-1)e$.
Thus, there are $2n$ roots $\lambda_1',\ldots,\lambda_{2n}'\in\bar{\Bbb Q}_p$ of $\chi$ which have $p$-adic valuations $0,e,2e,\ldots,(2n-1)e$. Since $h^{-1}h_p^{-e}\gamma^{-1}$ and $A$ differ by a scalar factor $p^{ne}$, we deduce that $h^{-1}h_p^{-e}\gamma^{-1}$ has $2n$ pairwise different eigenvalues $\lambda_1,\ldots,\lambda_{2n}$ with $p$-adic values $-ne,-(n-1)e,\ldots,(n-1)e$.
d.) We claim that $\chi$ splits over ${\Bbb Q}_p$ as a product of linear factors. In fact, if $\chi$ does not split completely then there is an irreducible factor of $\chi$ of degree $\ge 2$, hence, there are roots $\lambda_i,\lambda_j$ which are conjugate by an automorphism $\sigma\in{\rm Aut}(\bar{\Bbb Q}_p/{\Bbb Q}_p)$. This would imply that $\lambda_i,\lambda_j$ have the same $p$-adic value which is a contradiction. Thus, $\chi$ is split over ${\Bbb Q}_p$, hence, all roots $\lambda_i$ are contained in ${\Bbb Q}_p$.
e.) In c.) we have seen that the image of $h^{-1}h_p^{-e}\gamma^{-1}$ under the inclusion ${\bf G}({\Bbb Q}_p)\subseteq{\bf GL}_{2n}({\Bbb Q}_p)$ has $2n$ different eigenvalues. Thus, $h^{-1}h_p^{-e}\gamma^{-1}$ is a semisimple element in ${\bf GL}_{2n}({\Bbb Q}_p)$ and, hence, in ${\bf G}({\Bbb Q}_p)$.
In particular, $h^{-1}h_p^{-e}\gamma^{-1}$ is (${\bf G}(\bar{\Bbb Q}_p)$-)conjugate to an element $$ \xi={\rm diag}(\xi_1,\ldots,\xi_{2n}) $$ in ${\bf T}(\bar{\Bbb Q}_p)$. Since conjugate matrices in ${\bf GL}_{2n}(\bar{\Bbb Q}_p)$ have the same eigenvalues, c.) and d.) imply that $\xi\in{\bf T}({\Bbb Q}_p)$ and that the $p$-adic values of the $\xi_i\in{\Bbb Q}_p$ are contained in the sequence $-ne,-(n-1)e,\ldots,(n-1)e$. Conjugating the regular element $\xi\in{\bf T}({\Bbb Q}_p)$ with a suitable element in the Weyl group ${\cal W}$ of ${\bf G}$ we may assume that $v_p(\alpha(\xi))<0$ for all $\alpha\in\Delta$.
This then implies that $$ (v_p(\xi_1),\ldots,v_p(\xi_{2n}))=(-e,-2e,\ldots,-ne,0,e,\ldots,(n-1)e) $$ which shows that $v_p(\alpha(\xi))=-e$ for all simple roots $\alpha$.
This completes the proof of the Lemma.
{\it Remark. } The entries $\xi_i$ of $\xi$ are even algebraic integers (contained in ${\Bbb Q}_p$).
{\bf (7.2) Lemma. } {\it Let $\lambda\in X({\bf T})$ be algebraic and dominant.
Then, for any $\gamma\in \Gamma$ and $h\in\Sigma^+\,(\subseteq{\bf T}({\Bbb Q}))$ we have $$
\lambda(hh_p^e)\,{\rm tr}(\pi_\lambda(h^{-1}h_p^{-e}\gamma^{-1})|L_\lambda)\in{\Bbb Z}. $$ }
{\it Proof. } Since $hh_p^e\gamma\in{\bf G}({\Bbb Q})$ we know that $\pi_\lambda(hh_p^e\gamma)$ leaves $L_\lambda({\Bbb Q})$ invariant. The subspace $L_\lambda({\Bbb Z})$
decomposes $$ L_\lambda({\Bbb Z})=\bigoplus_\mu L_\lambda(\mu,{\Bbb Z}), $$ where $\mu\in X({\bf T})$ runs over all weights of the form $$ \mu=\lambda-\sum_{\alpha\in\Delta} c_\alpha\alpha\leqno(7) $$ with $c_\alpha\in{\Bbb N}_0$ for all $\alpha\in \Delta$ and where ${\bf T}({\Bbb Q})$ acts on $L_\lambda({\mu},{\Bbb Z})\otimes{\Bbb Q}$ via $\mu$: $$ t v_\mu=\mu(t)v_\mu $$ for all $t\in {\bf T}({\Bbb Q})$ and all $v_\mu\in L_\lambda(\mu,{\Bbb Z})\otimes{\Bbb Q}$.
In particular, if $\mu$ is as in equation (7) then we obtain $$ \lambda(hh_p^e)\pi_\lambda(h^{-1}h_p^{-e})v_\mu=\prod_{\alpha\in\Delta} \alpha(h h_p^e)^{c_\alpha}\,v_\mu.\leqno(8) $$
Since $h\in\Sigma^+$ we obtain $\alpha(h)\in {\Bbb Z}$ and equation (8) implies that $\lambda(hh_p^e)\,\pi_\lambda(h^{-1}h_p^{-e}\gamma^{-1})$ leaves ${L_\lambda}({\Bbb Z})$ invariant which yields the claim (note that $\gamma\in{\bf G}({\Bbb Z})$). Thus, the proof of the lemma is complete.
{\bf (7.3) } We denote by $h_\alpha$ the coroot corresponding to the root $\alpha$ and for any root $\alpha$ and any
$\lambda\in X({\bf T})$ we set $\langle \lambda,\alpha\rangle=\lambda(h_\alpha)$.
We further denote by $\omega_\alpha$ the fundamental dominant weights corresponding to the basis $\Delta$ and ${\cal W}$ is the Weyl group of ${\bf G}$.
We also write $\rho$ for the half sum of the positive roots and we put $w\cdot\lambda=w(\lambda+\rho)-\rho$.
{\bf Lemma. } {\it Let $\lambda\in X({\bf T})$ be a dominant character. For any $w\in {\cal W}$, $w\not=1$, we obtain $w\cdot\lambda=\lambda-\sum_{\alpha\in \Delta} c_\alpha \alpha$, where $c_\alpha=c_{\alpha,w}\in{\Bbb N}_0$ and $$ c_{\alpha_0}\ge \frac{\langle\lambda,\alpha_0\rangle}{2} $$ for at least one root $\alpha_0\in\Delta$. }
{\it Proof. } Since $w\lambda$ is a weight of the irreducible ${\bf G}$-module of highest weight $\lambda$ we know that $$ w\lambda=\lambda-\sum_{\alpha\in\Delta} b_\alpha\alpha $$ for certain $b_\alpha\in{\Bbb N}_0$. Since $\lambda$ is dominant we may write $\lambda=\sum_{\alpha\in\Delta} d_\alpha\omega_\alpha$ where $d_\alpha=\langle \lambda,\alpha\rangle\in{\Bbb N}_0$. On the other hand, $w\not=1$ implies that $w\lambda$ is not contained in the Weyl chamber corresponding to the basis $\Delta$, hence, $\langle w\lambda,\alpha_0\rangle\le 0$ for some root $\alpha_0\in\Delta$. We obtain $$ 0\ge \langle w\lambda,\alpha_0 \rangle=\langle \sum_{\alpha\in \Delta} d_\alpha \omega_\alpha -\sum_{\alpha\in\Delta} b_\alpha \alpha,\alpha_0 \rangle=d_{\alpha_0}-\sum_{\alpha\in\Delta} b_\alpha \langle \alpha,{\alpha_0}\rangle. $$ Since $\langle\alpha,\alpha_0\rangle=\alpha(h_{\alpha_0})=2$ if $\alpha=\alpha_0$ and $\langle\alpha,\alpha_0\rangle\le 0$ if $\alpha\not=\alpha_0$
this yields $0\ge d_{\alpha_0}-2b_{\alpha_0}$. Thus, $$ b_{\alpha_0}\ge \frac{1}{2} d_{\alpha_0}=\frac{1}{2}\langle \lambda,\alpha_0\rangle. $$ Since $w\cdot\lambda=w\lambda+w\rho-\rho$ and $$ w\rho-\rho=-\sum_{\alpha\in\Phi^+\atop \alpha\in w\Phi^-}\alpha $$
this yields the claim and the Lemma is proven.
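{\it Example. } For illustration, in the rank one case $\Delta=\{\alpha\}$ with $w=s_\alpha$ the non-trivial reflection one has $s_\alpha\cdot\lambda=s_\alpha(\lambda+\rho)-\rho=\lambda-(\langle\lambda,\alpha\rangle+1)\alpha$, hence $c_{\alpha}=\langle\lambda,\alpha\rangle+1\ge\frac{1}{2}\langle\lambda,\alpha\rangle$, as asserted.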
{\bf (7.4) Congruences between characters values for different weights. } Using the {\it Weyl character formula} we obtain the following congruences between character values of irreducible algebraic representations of ${\bf G}$ for varying highest weights.
{\bf Proposition. }{\it Let $\lambda,\lambda'\in X({\bf T})$ be dominant characters and let $C\in{\Bbb Q}_{>0}$ such that $\langle\lambda,\alpha\rangle>2C$ and $\langle\lambda',\alpha\rangle>2C$ for all $\alpha\in\Delta$. If $\lambda\equiv\lambda'\pmod{(p-1)p^m X({\bf T})}$, then for any $\Gamma$-conjugacy class $[\Xi]_\Gamma\subseteq\Gamma hh_p^e\Gamma$, $e\in{\Bbb N}$, $h\in\Sigma^+$ the following congruence holds $$
\lambda(hh_p^{e}) {\rm tr}\,(\Xi^{-1}|{L_\lambda})\equiv \lambda'(hh_p^e) {\rm tr}\,(\Xi^{-1}|{L_{\lambda'}})\pmod{p^{{\rm min}(m+1,Ce)}}. $$ }
{\it Proof. } We note that by (7.2) Lemma $$
\lambda(hh_p^{e}) {\rm tr}\,(\Xi^{-1}|{L_\lambda}),\, \lambda'(hh_p^e) {\rm tr}\,(\Xi^{-1}|{L_{\lambda'}})\in{\Bbb Z}. $$ We may assume that the representative $\Xi$ of the $\Gamma$-conjugacy class $[\Xi]_\Gamma\subseteq \Gamma hh_p^e\Gamma$ is of the form $\Xi=\gamma h_p^eh$ for some $\gamma\in\Gamma$. By definition of $\Sigma^+$ we may write $h=h_{(p)}h_p^c$ with
$h_{(p)}\in\prod_{\ell\not|Np}\Sigma_\ell^+\subseteq {\bf T}({\Bbb Z}_p)$ and $c\in{\Bbb N}_0$. Hence, $$ \Xi=\gamma h_{(p)}h_p^{e'} $$ with $e'=e+c\ge e(>0)$. Thus, (7.1) Proposition implies that $\Xi^{-1}$ is ${\bf G}(\bar{\Bbb Q}_p)$-conjugate to an element $\xi\in {\bf T}({\Bbb Q}_p)$ satisfying $$ v_p(\alpha(\xi))=-e'\leqno(9) $$ for all $\alpha\in\Delta$.
Using the {\it Weyl character formula} we therefore obtain $$
\Delta(\xi)\cdot{\rm tr}\, (\Xi^{-1}|{L_\lambda})={\sum_{w\in W} (-1)^{\ell(w)}\,(w\cdot\lambda)(\xi)}, $$ where $$ {\Delta}(\xi)=\prod_{\alpha\in\Phi^+} (1-\alpha^{-1}(\xi)). $$
Note that equation (9) implies that $\Delta(\xi)$ is a $p$-adic unit, hence, $\Delta(\xi)\not=0$. Using (7.3) Lemma we can write $$ w\cdot \lambda=\lambda-\sum_{\alpha\in\Delta} c_{\alpha,w} \alpha $$ with $c_{\alpha,w}\in{\Bbb N}_0$ and $c_{\alpha_w,w}\ge \langle \lambda,\alpha_w \rangle/2$ for some root $\alpha_w\in\Delta$. We obtain $$
\lambda(hh_p^e)\,{\rm tr}\, (\Xi^{-1}|{L_\lambda})=\frac{\lambda(hh_p^e\xi)}{\Delta(\xi)} \,\left(1+\sum_{w\not=1} {\rm sgn}(w)\prod_{\alpha\in\Delta} \alpha(\xi)^{-c_{\alpha,w}}\right). $$ Since $v_p(\alpha^{-1}(\xi))=e'\ge 1$ for all $\alpha\in \Delta$ we find that $\Delta(\xi)\in{\Bbb Z}_p$ is a $p$-adic unit. Moreover, for any $\alpha\in\Delta$ we have $v_p(\alpha(hh_p^e))=e'$, hence, $$ v_p(\alpha(hh_p^e))=-v_p(\alpha(\xi)) $$ for all $\alpha\in\Delta$. In particular, this equality holds for all $\beta$ contained in the root lattice of ${\bf G}$ and since a (integral) multiple of any integral weight is contained in the root lattice we obtain $$ v_p(\chi(hh_p^e))=-v_p(\chi(\xi)) $$ for all $\chi\in X({\bf T})$. Thus, $\chi(hh_p^e\xi)\in{\Bbb Z}_p^*$ is a $p$-adic unit.
In particular, $\lambda(hh_p^e\xi)$ is a $p$-adic unit. Taking into account that $c_{\alpha,w}\ge 0$ for all $\alpha\in\Delta$, $w\in {\cal W}$ and that $c_{\alpha_w,w}\ge \langle \lambda,\alpha_w \rangle/2\ge C$ we thus obtain using equation (9) $$
\lambda(hh_p^e)\,{\rm tr}\, (\Xi^{-1}|{L_\lambda})\equiv \frac{\lambda(hh_p^e\xi)}{\Delta(\xi)}\pmod{p^{Ce'}{\Bbb Z}_p}.\leqno(10) $$
Since $\lambda\equiv\lambda'\pmod{(p-1)p^m X(\bf T)}$ there is a $\chi\in X(\bf T)$ such that $\lambda-\lambda'=(p-1)p^m\chi$. Taking into account that $\chi(hh_p^e\xi)$ is a $p$-adic unit this yields $$ \frac{\lambda(hh_p^e\xi)}{\lambda'(hh_p^e\xi)}
=\chi(hh_p^e\xi)^{(p-1)p^m}\in 1+p^{m+1}{\Bbb Z}_p $$ (here we use that $u^{p-1}\in 1+p{\Bbb Z}_p$ for any unit $u\in{\Bbb Z}_p^*$ and that $(1+p{\Bbb Z}_p)^{p^m}\subseteq 1+p^{m+1}{\Bbb Z}_p$).
Hence, $$ \lambda(hh_p^e\xi)\equiv\lambda'(hh_p^e\xi)\pmod{p^{m+1}{\Bbb Z}_p}. $$
Together with equation (10) which also holds with $\lambda$ replaced by $\lambda'$ we obtain $$
\lambda(hh_p^e)\,{\rm tr}\, (\Xi^{-1}|{L_\lambda})\equiv\lambda'(hh_p^e)\,{\rm tr}\, (\Xi^{-1}|{L_{\lambda'}})\pmod{p^{{\rm min}(m+1,Ce')}{\Bbb Z}_p}. $$ Since $e'\ge e$ this completes the proof.
{\bf (7.5) Congruences for Hecke operators in different weights. } Let $\beta\in{\Bbb Q}_{\ge 0}$. For any pair of dominant characters $\lambda,\lambda'\in X({\bf T})$ we denote by ${\bf e}_{\lambda,\lambda'}={\bf e}_{{\bf H}_\lambda,{\bf H}_{\lambda'}}^{\le\beta}$ the approximate idempotent projecting to the slope $\le\beta$ subspaces of ${\bf H}_\lambda$ and ${\bf H}_{\lambda'}$ as defined in (3.2); $\{{\bf e}_{\lambda,\lambda'}\}_\lambda$ resp. $\{{\bf e}_{\lambda,\lambda'}\}_{\lambda'}$ then is the approximate idempotent projecting to the slope subspaces ${\bf H}_\lambda^{\le\beta}$ and ${\bf H}_{\lambda'}^{\le\beta}$ which are now defined with respect to the normalized action of $T_p\in{\cal H}$ (cf. (6.3)).
{\bf Theorem. }{\it Let $C\in{\Bbb Q}_{>0}$. Assume that the dominant characters $\lambda,\lambda'\in X({\bf T})$ satisfy \begin{itemize}
\item $\langle \lambda,\alpha\rangle>2C$ and $\langle \lambda',\alpha\rangle>2C$ for all $\alpha\in \Delta$.
\item $\lambda\equiv\lambda'\pmod{(p-1)p^m X({\bf T})}$.
\end{itemize}
Then for all Hecke operators $T\in {\cal H}_{\cal O}$ and all slopes $\beta\in{\Bbb Q}_{\ge 0}$ the following congruence holds: $$
{\rm tr}\,(\{{\bf e}_{\lambda,\lambda'}^{\lceil\frac{m+1}{C}\rceil}T\}_\lambda|{\bf H}_\lambda)\equiv {\rm tr}\,(\{{\bf e}_{\lambda,\lambda'}^{\lceil\frac{m+1}{C}\rceil}T\}_{\lambda'}|{\bf H}_{\lambda'})\pmod{p^{\Box}}, $$ where $$ \Box=(1-\frac{\beta M(\beta)}{C})(m+1)-\beta M(\beta). $$ }
{\it Proof. } Since ${\cal H}_{\cal O}$ is generated as ${\cal O}$-module by the Hecke operators $T_h$, $h\in\Sigma^+$ (cf. (5.6)), we may assume that $T=T_{h}$ for some ${h}\in\Sigma^+$.
We write $h=h_p^fh_{(p)}$ with $f\in{\Bbb N}_0$ and $h_{(p)}\in\prod_{\ell\not|Np}\Sigma_\ell^+$ and we set $L=\lceil\frac{m+1}{C}\rceil$. We recall that ${\bf e}_{\lambda,\lambda'}=p(T_p)$, where the polynomial $p=\sum_{e=1}^{t}c_e X^e$ satisfies the following properties:
its degree $t$ is bounded by $M(\beta)$, its constant term $p(0)=0$ and ${\bf S}(p)\ge -\beta$ (cf. (3.3) Lemma). Since $\{{\bf e_{\lambda,\lambda'}}\}_\lambda=p(\{T_p\}_\lambda)$ we obtain $$ \{{\bf e}_{\lambda,\lambda'}^LT_h\}_\lambda=\{{\bf e}_{\lambda,\lambda'}\}_\lambda^L \{T_h\}_\lambda=p^L(\{T_p\}_\lambda)\{T_h\}_\lambda=\sum_{e=L}^{tL} b_e \{T_p\}_\lambda^e\{T_h\}_\lambda, $$ where $v_p(b_e)\ge -e\beta$. Since $\{T_p\}_\lambda^e=\lambda(h_p^e)T_p^e$, $\{T_h\}_\lambda=\lambda(h_p^f)T_h$ and $T_hT_p^e=T_hT_{h_p^e}=T_{hh_p^e}$
this yields $$ \{{\bf e}_{\lambda,\lambda'}^LT_h\}_\lambda=\sum_{e=L}^{tL} b_e \lambda(h_p^{e+f}) T_{hh_p^e}. $$ Applying the Topological trace formula of Bewersdorff (cf. (6.4) Theorem) we obtain \begin{eqnarray*}
(3)\qquad{\rm tr}\,(\{T_h {\bf e}_{\lambda,\lambda'}^L\}_\lambda|{\bf H}_{\lambda})&=&\sum_{e=L}^{tL} b_e\,\lambda(h_p^{e+f})\,{\rm tr}\,(T_{hh_p^e}|{H^d(\Gamma\backslash X,{\cal L}_\lambda)})\\
&=&\sum_{e=L}^{tL} b_e \,\lambda(h_p^{e+f}) \sum_{[\Xi]\in\Gamma hh_p^e\Gamma/\sim_\Gamma} c_{[\Xi]}\, {\rm tr}\,(\Xi^{-1}|{L_\lambda}).\\ \end{eqnarray*} Let $[\Xi]\in\Gamma hh_p^e\Gamma/\sim_\Gamma$. Since $\lambda-\lambda'\in(p-1)p^m X({\bf T})$ we know that $\lambda'(h_{(p)})\lambda(h_{(p)})^{-1}\in 1+p^{m+1}{\Bbb Z}_p$
and (7.4) Proposition implies (note that $e\ge L\ge 1$) $$
\lambda(h_p^{e+f})\,b_e\,{\rm tr}\,(\Xi^{-1}|{L_\lambda}) \equiv \lambda'(h_p^{e+f})\,b_e\,{\rm tr}\,(\Xi^{-1}|{L_{\lambda'}})\pmod{p^{\S}},\leqno(4) $$ where $$ \S={\rm min}(m+1,Ce)-e\beta $$ (note that $v_p(b_e)\ge-e\beta$). Since $e\ge L\ge (m+1)/C$ we obtain $Ce\ge m+1$. Hence, $\S=m+1-e\beta$. Recalling that $e\le tL$, $t\le M(\beta)$ and $L\le \frac{m+1}{C}+1$ we further obtain $$ \S
\ge(1-\frac{\beta M(\beta)}{C})(m+1)-\beta M(\beta). $$ Thus, equation (3) (which also holds with $\lambda$ replaced by $\lambda'$) and equation (4) yield the claim. This completes the proof of the Theorem.
{\bf (7.6) $p$-Adic families of Siegel eigen classes. } We select a slope $\beta\in{\Bbb Q}_{\ge 0}$. The above Theorem immediately implies that the family of ${\cal H}$-modules $({\bf H}_\lambda)=(H^d(\Gamma\backslash\bar{X},{\cal L}_\lambda))$ satisfies equation ($\dag$) in (3.7), where $a'=\frac{1}{C}$,
$a=1-\frac{\beta M(\beta)}{C}$, $b=-\beta M(\beta)$. Thus, (3.7) Proposition yields the existence of $p$-adic continuous families of finite slope with (to simplify, we may omit a factor "$2$") $$ {\sf a}=\frac{1}{M(\beta)}{\rm min}\,(a,\frac{a'}{M(\beta+1)}) \quad\mbox{and}\quad {\sf b}
=-\beta-(M(\beta)+2)\log_p (M(\beta)).\leqno(5) $$ We want to choose $C>0$ such that ${\sf a}$ becomes large. To this end we set $$ C=C(\beta)=\beta M(\beta)+\frac{1}{M(\beta+1)} $$ and obtain $$ {\sf a}=\frac{1}{(1+\beta M(\beta+1)M(\beta))M(\beta)}.\leqno(6) $$ Since with the above choice of $C(\beta)$ the numbers $a'/M(\beta+1)$, $a$ and $b$ are decreasing in $\beta$ the assumptions of (3.7) Proposition are satisfied. We denote by ${\cal E}(\lambda)^\beta={\cal E}({\bf H}_\lambda^\beta)$ the set of all characters $\Theta:\,{\cal H}\rightarrow\bar{\Bbb Q}_p$ which are defined over ${\cal O}$ and such that the corresponding eigenspace $H^d(\Gamma\backslash X,{\cal L}_\lambda)^\beta(\Theta)$ w.r.t. the normalized action of ${\cal H}$ does not vanish. We then obtain from (3.7) the following
{\bf (7.7) Corollary. }{\it Let $\beta\in{\Bbb Q}_{\ge 0}$.
1.) ${\rm dim}\, H^d(\Gamma\backslash\bar{X},{\cal L}_\lambda)^\beta$ is locally constant, i.e. there is $D=D(\beta)$ only depending on $\beta$ such that $\lambda\equiv\lambda'\pmod{(p-1)p^D X({\bf T})}$ implies $$ {\rm dim}\, H^d(\Gamma\backslash\bar{X},{\cal L}_\lambda)^\beta={\rm dim}\, H^d(\Gamma\backslash\bar{X},{\cal L}_{\lambda'})^\beta $$
2.) Any $\Theta\in{\cal E}(\lambda_0)^{\beta}$ fits in a p-adic continuous family of eigencharacters of slope $\beta$, i.e. there are $\Theta_\lambda\in{\cal E}(\lambda)^{\beta}$ such that $\Theta_{\lambda_0}=\Theta$ and $\lambda\equiv\lambda'\pmod{(p-1)p^m X({\bf T})}$ implies $$ \Theta_\lambda\equiv\Theta_{\lambda'}\pmod{p^{{\sf a}(m+1)+{\sf b}}}; $$ here, ${\sf a}$ and ${\sf b}$ as in equation (5) and (6) and $\lambda$ runs over all dominant characters satisfying $\langle\lambda,\alpha\rangle>2\beta M(\beta)+2$ for all simple roots $\alpha$.
}
{\it Remark. } The congruence between two eigencharacters $\Theta_\lambda$ and $\Theta_{\lambda'}$ is non trivial only if ${\sf a}(m+1)+{\sf b}>0$, i.e. only if $$ m+1>E(\beta)=(-b+M(\beta)(M(\beta)+2)\log_p M(\beta))(1+\beta M(\beta+1)M(\beta)). $$
Thus, only the existence of the family $(\Theta_\lambda)$ with $\lambda\equiv \lambda_0\pmod{p^{E(\beta)}}$ is a non trivial statement.
Note that $M(\beta)\rightarrow\infty$ if $\beta\rightarrow\infty$ and that we may assume $M(\beta)\le M(\beta+1)$; these two statements imply that $E(\beta)\sim M(\beta+1)^4\log_p M(\beta+1)$ for $\beta\rightarrow \infty$.
{\bf \Large References}
[A-S] Ash, A., Stevens, G., $p$-adic Deformations of arithmetic cohomology, preprint 2008
[B] Bewersdorff, J., Eine Lefschetzsche Fixpunktformel f{\"u}r Hecke Operatoren, Bonner Mathematische Schriften {\bf 164} 1985
[H 1] Hida, H., Iwasawa modules attached to congruences of cusp forms, Ann. Sci. Ecole Norm. Sup. {\bf 19} (1986) 231 - 273
[H 2] -, On $p$-adic Hecke algebras for ${\rm GL}_2$ over totally real fields, Ann. of. Math. {\bf 128} (1988) 295 - 384
[K] Koike, M., On some $p$-adic properties of the Eichler Selberg trace formula I, II, Nagoya Math. J. {\bf 56} (1974), 45 - 52, {\bf 64} (1976), 87 - 96
[M] Miyake, T., Modular forms, Springer 1989
[Ma 1] Mahnkopf, J., On Truncation of irreducible representations of Chevalley groups, J. of Number Theory, 2013
[Ma 2] -, $P$-adic families of modular forms, in: Oberwolfach Reports, 2011
[Ma 3] -, On $p$-adic families of modular forms, in: Proceedings of RIMS Conference on ``Automorphic forms, automorphic representations and related topics", 93 - 108, Kyoto University, 2010
\end{document}
|
arXiv
|
James reduced product
In topology, a branch of mathematics, the James reduced product or James construction J(X) of a topological space X with given basepoint e is the quotient of the disjoint union of all powers X, X², X³, ... obtained by identifying points (x1,...,xk−1,e,xk+1,...,xn) with (x1,...,xk−1, xk+1,...,xn). In other words, its underlying set is the free monoid generated by X (with unit e). It was introduced by Ioan James (1955).
For a connected CW complex X, the James reduced product J(X) has the same homotopy type as ΩΣX, the loop space of the suspension of X.
The commutative analogue of the James reduced product is called the infinite symmetric product.
References
• James, I. M. (1955), "Reduced product spaces", Annals of Mathematics, Second Series, 62: 170–197, doi:10.2307/2007107, ISSN 0003-486X, JSTOR 2007107, MR 0073181
|
Wikipedia
|
\begin{document}
\title{Exceptional cosmetic surgeries on homology spheres} \author{Huygens C. Ravelomanana} \maketitle \date{}
\begin{abstract} The cosmetic surgery conjecture is a longstanding conjecture in 3-manifold theory. We present a theorem about exceptional cosmetic surgery for homology spheres. Along the way we prove that if the surgery is not a small Seifert $\Z/2\Z$-homology sphere or a toroidal irreducible non-Seifert surgery then there is at most one pair of exceptional truly cosmetic slopes. We also prove that toroidal truly cosmetic surgeries on integer homology spheres must be integer homology spheres. \end{abstract} \maketitle \textbf{Keywords:} Dehn surgeries, cosmetic surgeries, hyperbolic knots, exceptional surgeries.
\section{introduction} In \cite{RavelomananaS3-paper} we proved that for hyperbolic knots in $S^3$ the slope of \textit{exceptional truly cosmetic surgeries} must be $\pm 1$ and that the surgery must be irreducible toroidal and not Seifert fibred. As a consequence we showed that there are no truly cosmetic surgeries on alternating and arborescent knots in $S^3$. Here we study the problem for the case of integer homology spheres in general. Recall that a surgery on a hyperbolic knot is \textit{exceptional} if it is not hyperbolic and that two surgeries on the same knot but with different slopes are called \textit{cosmetic} if they are homeomorphic and are called \textit{truly cosmetic} if the homeomorphism preserves orientation. The cosmetic surgery Conjecture \cite[Conjecture (A) in problem 1.81 ]{Kirby-list} states that if the knot complement is boundary irreducible and irreducible then two surgeries on inequivalent slopes are never truly cosmetic. For more details about cosmetic surgeries we refer to \cite{Matignon-hyp-knot},\cite{Weeks}, \cite{Matignon-knot}, \cite{Matignon} and \cite{RavelomananaS3-paper}.
In this paper we study truly cosmetic surgeries along hyperbolic knots in homology spheres. We are concerned with the case where the two slopes are both exceptional slopes. We call such surgeries \textit{exceptional truly cosmetic surgeries}. If $K$ is a knot in an integer homology sphere $Y$, we denote $ \nb( K)$ a regular neighbourhood of $K$, $Y_K:= Y \setminus \text{int}(\nb(K))$ the exterior of $K$ and $Y_K(r)$ Dehn surgery along $K$ with slope $r$. When the manifold is a homology sphere we identify $r$ with a rational number according to the standard meridian longitude basis where the longitude is the preferred longitude. The main result of this paper is the following.
\begin{thm}\label{my-proposition-2} Let $K$ be a hyperbolic knot in a homology sphere $Y$. Let $0<p$ and $q<q'$ be integers. If $Y_K(p/q)$ is homeomorphic to $Y_K(p/q')$ as oriented manifolds, then $Y_K(p/q)$ is either \begin{enumerate} \item a reducible manifold in which case $p=1$ and $q'=q+1$, \item a toroidal Seifert fibred manifold in which case $p=1$ and $q'=q+1$, \item a small Seifert manifold with infinite fundamental group in which case either \begin{itemize}
\item[•] $p=1$ and $|q-q'|\leq 8$. \item[•] or $p=5, \ q'=q+1$ and $q \equiv 2 \left[\text{mod}\ 5\right] $. \item[•] or $p=2, \ \text{and} \ q'=q+2$ or $q'=q+4$. \end{itemize}
\item a toroidal irreducible non-Seifert fibred manifold in which case $p=1$ and $|q'-q| \leq 3$. \end{enumerate} \end{thm}
The following two corollaries are straightforward consequences of the theorem.
\begin{cor}\label{corollary-toroidal-give-integer} Toroidal truly cosmetic surgeries along hyperbolic knots in integer homology spheres yield integer homology spheres. \end{cor}
\begin{cor}\label{corollary-at-most-1-pair} For a hyperbolic knot in a homology sphere there is at most one pair of exceptional truly cosmetic slopes which does not yield a $\Z/2\Z$-homology small Seifert surgery or a toroidal irreducible non-Seifert surgery. \end{cor}
\paragraph*{Notations.} If a torus $T$ is a component of $\partial M$ we denote by $M(s,T)$ the Dehn filling of $M$ with slope $s$ along $T$; if $\partial M$ has only one torus component we will simply write $M(s)$. In the case of surgery along a knot $K$ in a 3-manifold $Y$ we use the notation $Y_K(s)$ defined earlier.
\textit{Rational longitude.} Let $K$ be a knot in a rational homology 3-sphere $Y$. The knot $K$ has finite order in $H_1(Y,\Z)$ so there is an integer $n$ and a surface $\Sigma \subset Y$ such that $nK=\partial \Sigma$. The intersection of $\Sigma$ with $\partial \nb (K)$ consists of $n$ parallel copies of a curve $\lambda_M$. The isotopy class in $\partial \nb (K)$ of this curve does not depend on the choice of the surface $\Sigma$. We call the corresponding slope the rational longitude of $K$ and denote it by $\lambda_M$.
We will need the following lemma from \cite{Watson-PhD}. \begin{lem} \cite{Watson-PhD} \label{watson-lemma} Let $s$ be a slope on $ \partial Y_K$. There is a constant $c_M$ such that
$$|H_1(Y_K(s);\Z)|= c_M\ \Delta(s,\lambda_M).$$ \end{lem} Here $\Delta(r,s)$ stands for the distance between two slopes $r$ and $s$, i.e. their minimal geometric intersection number.
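For later use we also record the elementary identity: if two slopes are written $a/b$ and $c/d$ with respect to a fixed meridian-longitude basis, then $\Delta(a/b,c/d)=|ad-bc|$; in particular $\Delta(p/q,p/q')=|pq'-qp|=p\,|q-q'|$, which is the form in which the distance appears below.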
\subsection*{Acknowledgment.} The author would like to thank Steven Boyer for helpful discussions and comments.
\section{Proof of Theorem~\ref{my-proposition-2}} \label{Chapter: Exceptional Cosmetic Surgery on Integer Homology Sphere}
Let us first recall a result of Boyer and Lines about the second derivative of the Alexander polynomial $\Delta''_K$ of a knot $K$.
\begin{pro}\cite{Boyer-Lines}\label{my-proposition-1} Let $K$ be a non-trivial knot in a homology 3-sphere $Y$ and let $r$ and $s$ be two distinct slopes. If there is an orientation preserving homeomorphism between $Y_K(r)$ and $Y_K(s)$ then $\Delta''_K(1) = 0$. \end{pro} A consequence of this is the following lemma. \begin{lem}\label{lemma-dedekind}
Let $K$ be a non-trivial knot in a homology 3-sphere $Y$. If there is an orientation preserving homeomorphism between $Y_K(p/q)$ and $Y_K(p/q')$ then $s(q,p)=s(q',p)$, where $$s(q,p):=\text{sign}(p)\cdot \sum^{|p|-1}_{k=1}((\frac{k}{p}))((\frac{kq}{p})),$$ with $$ ((x))=\begin{cases} x-[x]-\frac{1}{2}, & \text{if $x \notin \Z$}, \\ 0, & \text{if $x \in \Z$}. \end{cases}$$ \end{lem} \begin{proof} By using the surgery formula\footnote{we have a difference of sign here due to our convention for lens spaces} \cite[Corollary 4.5]{Saveliev} for the Casson invariant $\lambda$, an invariant of oriented rational homology spheres, we have $$\lambda(Y)+\lambda(L(p,q))+\frac{q}{2p}\Delta_K''(1)=\lambda(Y_K(p/q))=\lambda(Y_K(p/q'))=\lambda(Y)+\lambda(L(p,q'))+\frac{q'}{2p}\Delta_K''(1)$$ but $\Delta''_K(1) = 0$ by Proposition~\ref{my-proposition-1} so $\lambda(L(p,q))=\lambda(L(p,q'))$. On the other hand Boyer and Lines in \cite{Boyer-Lines} have computed the Casson invariant for a lens space $L(p,q)$ to be
$$\lambda(L(p,q))=-\frac{1}{2}s(q,p).$$ \end{proof}
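{\it Example. } As an illustration of the formula, $$ s(1,7)=\sum_{k=1}^{6}\Big(\Big(\frac{k}{7}\Big)\Big)^2=\frac{25+9+1+1+9+25}{196}=\frac{5}{14}, $$ and since $((-x))=-((x))$ we also get $s(6,7)=-s(1,7)=-\frac{5}{14}$; these are two of the values used in the proof of Lemma~\ref{my-lemma-2} below.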
Let $Y$ be an integer homology sphere and let $K$ be a knot in $Y$. Assume $Y_K(p/q)\cong + Y_K(p/q')$ then we have an induced isomorphism on homology and between the linking pairing of $Y_K(p/q)$ and $Y_K(p/q')$. More precisely let $\left[\mu \right]_q $ (resp. $\left[\mu \right]_{q'} $) be the meridian generator of $H_1(Y_K(p/q))$ (resp. $H_1(Y_K(p/q'))$), then we can find a unit $ u \in \left(\Z/p\Z\right)^*$ such that
$$f_*\left[\mu \right]_q =\left[\mu \right]_{q'} u.$$
On the other hand if we denote by $lk_q$ (resp. $lk_{q'}$) the linking pairing of $Y_K(p/q)$ (resp. $Y_K(p/q')$) and $f_*$ the map induced on homology then we can check that $$lk_q(\left[\mu \right]_q,\left[\mu \right]_q)=-q/p \ \ \left[\text{mod}\ \Z\right].$$ The isomorphism $f_*$ then gives $$ \begin{aligned} lk_q(\left[\mu \right]_q,\left[\mu \right]_q) & =lk_{q'}(f_*\left[\mu \right]_q,f_*\left[\mu \right]_q) \ \ \ \left[\text{mod}\ \Z\right]\\ &=lk_{q'}(\left[\mu \right]_{q'},\left[\mu \right]_{q'}) u^2\ \ \ \left[\text{mod}\ \Z\right] \end{aligned}$$ and it follows that \begin{equation}\label{congruence} -\frac{q}{p} \equiv -\frac{q'}{p} u^2 \ \ \ \left[\text{mod}\ \Z\right], \ \ \text{i.e.} \ \ \ q \equiv q' u^2 \ \ \ \left[\text{mod}\ p\right]. \end{equation}
As a consequence we have the following lemma.
\begin{lem}\label{my-lemma-2} Let $K$ be a hyperbolic knot in a $\Z$-homology sphere $Y$. Let $p/q$ and $p/q'$ be exceptional slopes such that $0<p$ and $q < q'$. If there is an orientation preserving homeomorphism between $Y_K(p/q)$ and $Y_K(p/q')$ then one of the following holds:
\begin{itemize}
\item[(a)] $p=1$ and $|q-q'|\leq 8$.
\item[(b)] $p=5, \ q'=q+1$ and $q \equiv 2 \left[\text{mod}\ 5\right] $. \item[(c)] $p=2, \ \text{and} \ q'=q+2$ or $q'=q+4$. \end{itemize}
\end{lem}
\begin{proof}
Since the slopes are exceptional by \cite[Theorem 1.2]{Lackenby-Meyerhoff} $\Delta(p/q,p/q')=|pq'-qp|=p |q-q'| \leq 8$ so $p \leq 8$. If $p \in \left\{8,7,6,5\right\}$ then $|q-q'|\leq 1$ and $q'=q+1$. Since one of $q$ and $q+1$ is even and $p,q$ (resp. $p,q'$) are relatively prime $p$ cannot be $6$ or $8$. Similarly if $p\in \left\{4,3\right\}$ then $|q-q'|\leq 2$ and $q'\in \left\{q+1,q+2\right\}$. If $p=2$ then $|q-q'|\leq 4$ and $q'\in \left\{q+1,q+2,q+3,q+4\right\}$ but we must have $q\equiv q'$ $[\text{mod}\ 2]$ by Equation~\ref{congruence} so $q'\in \left\{q+2,q+4\right\}$.
We use the same equation for $p\in \left\{7,5,4,3\right\}$ to obtain the result. \begin{itemize}
\item[•] Case $p=7$. The squares modulo $7$ are $1$, $2$ and $4$; they are all units, so
$$q \equiv q+1 \ \ \ \left[\text{mod}\ 7\right]\ \ \ \text{ or}\ \ \ \ q \equiv 2 (q+1) \ \ \ \left[\text{mod}\ 7\right]\ \ \ \text{ or}\ \ \ \ q \equiv 4 (q+1) \ \ \ \left[\text{mod}\ 7\right].$$ The first equation is impossible and the last two are equivalent to
$$q \equiv 5 \ \ \ \left[\text{mod}\ 7\right]\ \ \ \text{ or}\ \ \ \ q \equiv 1 \ \ \ \left[\text{mod}\ 7\right].$$
By a straightforward computation $$s(5,7)=\frac{-1}{14}, \ \ \ s(6,7)=\frac{-5}{14}, \ \ \ s(1,7)=\frac{5}{14}, \ \ \ s(2,7)=\frac{1}{14}.$$ Using the fact that $s(a,p)=s(b,p)$ if $a \equiv b \ \ \ \left[\text{mod}\ p\right]$, we get
If $q \equiv 5 \ \ \ \left[\text{mod}\ 7\right]$, $$s(q,7)=s(5,7)=\frac{-1}{14}\neq \frac{-5}{14}=s(6,7)=s(q+1,7)$$ If $q \equiv 1 \ \ \ \left[\text{mod}\ 7\right]$, $$s(q,7)=s(1,7)=\frac{5}{14}\neq \frac{1}{14}=s(2,7)=s(q+1,7)$$ This contradicts Lemma \ref{lemma-dedekind} which says that we must have $s(q,p)=s(q',p)$. \item[•] Case $p=5$. The squares modulo $5$ are $1$ and $4$, both of which are units, therefore
$$q \equiv q+1 \ \ \ \left[\text{mod}\ 5\right]\ \ \ \text{ or}\ \ \ \ q \equiv 4(q+1) \ \ \ \left[\text{mod}\ 5\right].$$ The first equation has no solution and the second is equivalent to $$q \equiv 2 \ \ \ \left[\text{mod}\ 5\right].$$
\item[•] Case $p=4$. We have $q'\in \{q+1,q+2\}$ and the only square modulo $4$ is $1$ therefore
$$q \equiv q+1 \ \ \ \left[\text{mod}\ 4\right]\ \ \ \text{ or}\ \ \ \ q \equiv q+ 2 \ \ \ \left[\text{mod}\ 4\right].$$ These equations have no solutions so the case $p=4$ is not possible.
\item[•] Case $p=3$. We have $q'\in \{q+1,q+2\}$ and the only square modulo $3$ is $1$, therefore this case is also impossible.
\end{itemize} \end{proof}
The next two theorems of Gordon and Wu about toroidal surgery from \cite{toroidal} and \cite{punctured-tori} will play a key part in the proof of Theorem~\ref{my-proposition-2}. The first theorem is about pairs of toroidal slopes at distance $4$ or $5$ apart and the second theorem is for distance greater than $5$. \begin{thm}\cite{toroidal}\label{toroidal:Theorem 1.1} There exist fourteen hyperbolic $3$-manifolds $M_i$, $1 \leq i \leq 14$, such that \begin{enumerate} \item $\partial M_i$ consists of two tori $T_0, T_1$ if $i \in \{1,2,3,14\}$, and a single torus $T_0$ otherwise;
\item there are slopes $r_i, s_i$ on $T_0$ such that $M_i(r_i,T_0)$ and $M_i(s_i,T_0)$ are toroidal, \begin{itemize} \item $\Delta(r_i,s_i) = 4$ if $i \in \{1,2,4,6,9,13,14\}$, and \item $\Delta(r_i,s_i) = 5$ if $i \in \{3,5,7,8,10,11,12\}$; \end{itemize}
\item if $M$ is a hyperbolic 3-manifold with toroidal Dehn fillings $M(r), M(s)$ where $\Delta(r,s) = 4$ or $5$, then $(M,r,s)$ is equivalent either to $(M_i,r_i,s_i)$ for some $1\leq i \leq 14$, or to $(M_i(t,T_1),r_i,s_i)$ where $i \in \{1,2,3,14\}$ and $t$ is a slope on the boundary component $T_1$ of $M_i$.
\end{enumerate} Here we define two triples $(N_1, r_1, s_1)$ and $(N_2, r_2, s_2)$ to be {\it equivalent} if there is a homeomorphism from $N_1$ to $N_2$ which sends the boundary slopes $(r_1, s_1)$ to $(r_2, s_2)$ or $(s_2, r_2)$. \end{thm}
Let $W$ be the exterior of the Whitehead link and let $T_0$ be a boundary component of $W$. Choosing a standard meridian-longitude basis $\mu,\lambda$ for $H_1(T_0)$ we can identify slopes on $T_0$ with elements of $\Q\cup \{1/0\}$. The manifolds $W(1),\ W(2),\ W(-5),$ $ W(-5/2)$ are hyperbolic and they all admit a pair of toroidal slopes $r,s$ with $\Delta(r,s)>5$. Gordon proved that these examples are the only possibilities for hyperbolic manifolds with a pair of toroidal slopes at distance $>5$.
\begin{thm}\cite{punctured-tori}\label{theorem-Gordon-punctured-tori} Let $M$ be an irreducible 3-manifold and $T$ a torus component of $\partial M$. If two slopes $r$ and $s$ on $T$ are toroidal then either \begin{enumerate} \item $\Delta(r,s)\leq 5$; or \item $\Delta(r,s) =6 $ and $M$ is homeomorphic to $W(2)$; or \item $\Delta(r,s) =7 $ and $M$ is homeomorphic to $W(-5/2)$; or \item $\Delta(r,s) =8 $ and $M$ is homeomorphic to $W(1)$ or $W(-5)$. \end{enumerate} \end{thm}
We can now get more restrictions on the slopes which gives cosmetic toroidal fillings. \begin{lem}\label{large-distance-toroidal} Let $K$ be a knot in a $\Z$-homology sphere $Y$. Let $r,s$ be two slopes on $\partial Y_K$. If $Y_K(r)$ and $Y_K(s)$ are toroidal and if there is an orientation preserving homeomorphism between them, then $\Delta(r,s) \leq 3$. \end{lem}
\begin{proof}
We will distinguish the cases $\Delta(r,s) > 5$ and $\Delta(r,s) = 5$, or $4$.
Let $W$ be the Whitehead link exterior. By Theorem \ref{theorem-Gordon-punctured-tori} if $\Delta(r,s) > 5$ then either \begin{itemize} \item[•] $\Delta(r,s)=6$ and $Y_K$ is homeomorphic to $W(2)$ \item[•] $\Delta(r,s)=7$ and $Y_K$ is homeomorphic to $W(-5/2)$ \item[•] $\Delta(r,s)=8$ and $Y_K$ is homeomorphic to $W(1)$ or $W(-5)$ \end{itemize} The manifold $Y_K(r)$ is then obtained by surgery on the Whitehead link with coefficients $\{2,a_1/b_1\}$ or $\{-5/2,a_2/b_2\}$ or $\{1,a_3/b_3\}$ or $\{-5,a_4/b_4\}$. We can then compute the order of the first homology using this coefficient, $$
|H_1(Y_K(r))|=\begin{vmatrix} 2& b_1\ \text{lk}(K_1,K_2)\\ \text{lk}(K_2,K_1)& a_1 \end{vmatrix}=\begin{vmatrix} 2& 0\\ 0& a_1 \end{vmatrix} =2a_1 $$ where $K_1,K_2$ denotes the two components of the Whitehead link and $\text{lk}(K_1,K_2)$ their linking number. Similarly we get for the other possibilities $$
|H_1(Y_K(r))|=\begin{vmatrix} -5& 0\\ 2& a_2 \end{vmatrix} =-5a_2, \ \ \ \text{or} \ \ \
|H_1(Y_K(r))|=\begin{vmatrix} -5& 0\\ 0& a_4 \end{vmatrix} =-5a_4. $$
On the other hand if $\Delta(r,s) > 5$ then $Y_K(r)$ must be a homology sphere by Lemma \ref{my-lemma-2}. Therefore these three possibilities cannot occur. The remaining case is $Y_K=W(1)$, which is the figure-8 complement, and there are no truly cosmetic surgeries along the figure-8 knot, as we can check for instance that $\Delta_{\text{figure-8}}''(1)\neq 0$ and use Proposition~\ref{my-proposition-1}.
Now we can assume $\Delta(r,s) \in \{4, 5\}$. We will do a case by case study using Theorem \ref{toroidal:Theorem 1.1}.
\begin{itemize} \item[•] Case 1. $Y_K$ is one of $M_1,M_2,M_3$.
The manifolds $M_1,M_2,M_3$ are the exterior of the following links \cite{toroidal}
For $i\in \{1,2,3\}$ we will denote by $K'_i$ and $K''_i$ the leftmost and rightmost components of the above links. From \cite{toroidal} we know that the component $T_1$ is the boundary of $\nb(K'_i)$. Let $t=a/b$ be a slope on $\nb(K'_i)$ written in its Seifert framing. If $a\neq 1$ then $H_1\left(M_i(t)\right)=\Z \oplus \Z/a\Z \neq \Z = H_1(Y_K)$. Therefore we must have $a=1$. But in this case we have a knot complement in $S^3$ and we know from \cite[Theorem 1.4]{RavelomananaS3-paper} that the surgery must be $\pm 1$ therefore $\Delta(r,s)=2$ which is not the case here.
\item[•] Case 2. $Y_K \cong M_{14}$. Let $t$ be a slope on the boundary component $T_1$ of $M_{14}$, and let $K_t$ be the core of the Dehn filling solid torus in $M_{14}(t,T_1)$. From \cite[Theorem 22.3]{toroidal} we can compute $$H_1\left(M_{14}(t,T_1)\right)/H_1(K_t)= \Z/2\Z \oplus \Z/2\Z.$$ Therefore $H_1\left(M_{14}(t)\right)\neq \Z = H_1(Y_K)$.
\item[•] Case 3. $Y_K$ is one of $M_4$ and $M_5$. From \cite[Theorem 22.3]{toroidal} we can also compute $$H_1(M_4(r))=\Z, \ \ \text{ and }\ \ H_1(M_5(r))=\Z \oplus \Z/4\Z.$$ These situations are not possible since $Y_K(r)$ is a rational homology sphere.
\item[•] Case 4. $Y_K$ is one of $M_6,M_7,M_{10},M_{11},M_{12}, M_{13}$. By \cite[Lemma 22.2]{toroidal} $Y_K$ admits a Lens space surgery. With respect to the framing in \cite{toroidal} these lens space surgeries are $$
\begin{array}{lll}
M_6(\infty) = L(9,2) \quad & M_7(\infty) = L(20,9) \quad & M_{10}(\infty) = L(14,3) \\
M_{11}(\infty) = L(24,5) \quad & M_{12}(\infty) = L(3,1) \quad & M_{13}(\infty) = L(4,1) \end{array} $$
From this we can deduce that $|\text{Tor}\left(H_1(Y_K)\right)| \neq 1$ which is not possible since $H_1(Y_K) =\Z$. \item[•] Case 5. $Y_K$ is one of $M_8$ and $M_9$. From \cite[Lemma 22.2]{toroidal} the manifolds $M_8$ and $M_9$ have two toroidal surgeries and one lens space surgery listed as follows with respect to the framing used in \cite{toroidal} $$
\begin{array}{lll}
M_8(0), \quad & M_8(-5/4),
\quad & M_8(-1) = L(4,1) \\
M_9(0), \quad & M_9(-4/3), \quad & M_9(-1) = L(8,3) \end{array} $$
For $i \in \{8,9\}$ let $a=|\text{Tor}\left(H_1(M_i)\right)|$ and $l$ be the order of the preferred rational longitude $\lambda_{M_i}$. We are going to express the framing used in \cite{toroidal} according to our standard basis $\{\mu, \lambda_{M_i}\}$. Let $\lambda$ be the framing used in \cite{toroidal}. Then the $-1$ slope in this framing can be written $-\mu + (p\mu+\lambda_{M_i})=(p-1)\mu +\lambda_{M_i}$.
Using the fact that $|H_1\left(L(4,1)\right)|=4$ and $|H_1\left(L(8,3)\right)|=8$, with Lemma~\ref{watson-lemma} we get
$$|H_1\left(M_8(-1)\right)|=4=\Delta\left((p-1)\mu +\lambda_{M_8}\ ;\ \lambda_{M_8}\right)\ la=|p-1| la,$$
$$|H_1\left(M_9(-1)\right)|=8=\Delta\left((p-1)\mu +\lambda_{M_9}\ ;\ \lambda_{M_9}\right)\ la=|p-1| la.$$ Since $Y_K$ is a knot complement in an integer homology sphere, if $Y_K$ is one of $M_8$ or $M_9$ then we must have $l=a=1$. Therefore $p\in \{-3,5\}$ for $M_8$ and $p\in \{9,-7\}$ for $M_9$. We can then deduce
$$
\begin{array}{ll}
H_1(M_8(0))= \Z/5\Z \ \ \text{or} \ \Z/3\Z, \quad & H_1(M_8(-5/4))= \Z/15\Z \ \ \text{or} \ \Z/17\Z, \\
H_1(M_9(0))=\Z/9\Z\ \ \text{or} \ \Z/7\Z, \quad & H_1(M_9(-4/3))=\Z/23\Z \ \ \text{or} \ \Z/25\Z. \end{array} $$ Therefore $M_8(0)$ and $M_8(-5/4)$ are not homeomorphic and the same is true for $M_9(0)$ and $M_9(-4/3)$. We can conclude that $Y_K$ cannot be one of $M_8$ or $M_9$. \end{itemize} \end{proof}
The last lemma implies in particular that toroidal truly cosmetic surgeries on integer homology spheres must be integer homology spheres.
The next preliminary result addresses the case of Seifert fibred toroidal surgeries. Before going into it we need a bit of $PSL_2(\C)$-character variety theory. We refer to \cite{Boyer-Zhang} for more details about character variety theory.
Let $X(G)$ denote the $PSL_2(\C)$-character variety of a finitely generated group $G$. When $G=\pi_1(Z)$ where $Z$ is a path-connected space, we shall write $X(Z)$ for $X(\pi_1(Z))$. Recall that $X(G)$ is a complex algebraic variety and a surjective homomorphism $G\twoheadrightarrow H$ induces an injective morphism $X(H) \hookrightarrow X(G)$ by precomposition. A curve $X_0 \subset X(G)$ is called non-trivial if it contains the character of an irreducible representation. Each $\gamma \in X(G)$ determines an element $f_{\gamma}$ of the coordinate ring $\C[X(G)]$ where if $\rho: G \to PSL_2(\C)$ is a representation and $\chi_{\rho}$ the associated point in $X(G)$, then $f_{\gamma}(\chi_{\rho})=\text{trace}(\rho(\gamma))^2-4$.
When $G=\pi_1(M)$, any slope $r$ on $\partial M$ determines an element of $\pi_1(M)$, well-defined up to conjugation and taking inverse. Hence it induces a well-defined element $f_r \in \C[X(M)]$.
\begin{lem}\label{my-lemma-3} Let $K$ be a hyperbolic knot in an integer homology sphere $Y$. Let $r=p/q$ and $r'=p/q'$ be exceptional slopes such that $0<p$ and $q < q'$. If $Y_K(r)$ is homeomorphic to $Y_K(r')$ as oriented manifolds and is Seifert fibred and toroidal, then $p=1$ and $q'=q+1$. \end{lem} \begin{proof}
Let $\mathcal{B}$ be the base orbifold for $Y_K(r)$. Since $Y_K(r)$ is toroidal with finite first homology $\mathcal{B}$ cannot be spherical. Moreover it cannot be a sphere with strictly less than $4$ cone points. Thus $\mathcal{B}$ must be either hyperbolic or one among: $S^2(2,2,2,2)$, $\mathbb{T}^2$, $\RP^2(2,2)$ or the Klein bottle.
Since we assume that $Y_K(r)$ and $Y_K(r')$ are toroidal, by Lemma~\ref{large-distance-toroidal} $\Delta(r,r') \leq 3$, so $p\leq 3$. Lemma \ref{my-lemma-2} then implies that $p=1$ or $p=2$. If $p=2$ then $q'=q+2$ or $q'=q+4$ and it follows that $\Delta(r,r')=4$ or $8$, but this contradicts the fact that $\Delta(r,r') \leq 3$. Therefore we must have $p=1$. Furthermore, using the fact that $Y_K(r)$ is a Seifert fibred manifold, we have the following surjection in first homology: $H_1(Y_K(r)) \twoheadrightarrow H_1(\mathcal{B})$,
thus $|H_1(Y_K(r))|=p= 1 \geq |H_1(\mathcal{B})|$. However we know that $|H_1(S^2(2,2,2,2))|=|H_1(\RP^2(2,2))|=8$, $H_1(\mathbb{T}^2)=\Z\oplus \Z$ and $H_1(\text{Klein bottle})=\Z\oplus\Z/2\Z$. Thus $\mathcal{B}$ must be hyperbolic.
By the same argument as above $\mathcal{B}$ cannot be $\RP^2(a,b)$ since $|H_1(\RP^2(a,b))|=2ab > 1$.
By work of Thurston \cite{Thurston}, since $\mathcal{B} \neq \RP^2(a,b)$ the real dimension of the Teichm\"uller space $\mathcal{T}(\mathcal{B})$ of $\mathcal{B}$ is at least $2$. Moreover $\mathcal{T}(\mathcal{B})\subset X(\pi_1^{orb}(\mathcal{B}))$ where $\pi_1^{orb}(\mathcal{B})$ is the orbifold fundamental group of $\mathcal{B}$. On the other hand we have $$\pi_1(Y_K)\twoheadrightarrow \pi_1(Y_K(r)) \twoheadrightarrow \pi_1^{orb}(\mathcal{B})$$ which induces a sequence of inclusions $$X(Y_K) \supset X(Y_K(r)) \supset X(\pi_1^{orb}(\mathcal{B})) \supset \mathcal{T}(\mathcal{B}).$$ Therefore the complex dimension of $X(Y_K(r))$ is at least $1$. We want to prove that it contains a subvariety of complex dimension at least $2$. Assume on the contrary that all components of $X(Y_K(r))$ have complex dimension $1$. In this case $\mathcal{T}(\mathcal{B})$
would be an open set in a non-trivial curve $X_0 \subset X(Y_K(r))$. When $\chi_{\rho} \in \mathcal{T}(\mathcal{B})$, $\rho$ is the holonomy of a hyperbolic structure on $\mathcal{B}$ and it is well known that if $\gamma \in \pi_1^{orb}(\mathcal{B})$ has infinite order, then $f_{\gamma}(\chi_{\rho})$ is a real number.
Deforming $\chi_{\rho}$ in $\mathcal{T}(\mathcal{B})$ shows that $f_{\gamma}|_{X_0}$
is non-constant and must take some non-real values. This contradicts the fact that it is real-valued on the open subset $\mathcal{T}(\mathcal{B})\subset X_0$. Thus $X(Y_K)$ has a subvariety of complex dimension $2$ or larger on which $f_r$ is constant and which contains the character of an irreducible representation. Hence if $r'\neq r$ is any other slope, we can then construct a non-trivial curve $X_0 \subset X(Y_K)$ on which both $f_r$ and $f_{r'}$ are constant. Indeed let $X$ be this two dimensional subvariety; if $f_{r'}|_{X}$ is constant then we are done, otherwise we can take a regular value $z_0\in \C$ of $f_{r'}|_{X}$, the preimage $f_{r'}|_{X}^{-1}(z_0)$ is a codimension one subvariety of $X$ and we can take $X_0=f_{r'}|_{X}^{-1}(z_0)$. It follows that $f_s|_{X_0}$ is constant for each slope $s$. In particular for each ideal point $\tilde{x}$ of $X_0$ and slope $s\in \partial Y_K$, $\tilde{f}_s(\tilde{x})\in \C$. Now \cite[Proposition 4.10 \& Claim of page 786]{Boyer-Zhang} imply that there is a closed essential surface $S\subset Y_K$ which compresses in $Y_K(r)$ but stays incompressible in $Y_K(s)$ if $\Delta(s,r) >1$.
Suppose we have $\Delta(r,r') \geq 2 $, then $S$ must be incompressible in $Y_K(r')$. Since $Y_K$ is hyperbolic it has no incompressible torus. Therefore $S$ must have genus at least $2$ and is a horizontal surface.
On the other hand $Y_K\subset Y$ and $Y$ is a $\Z$-homology sphere so $S$ must separate $Y_K$ and also $Y_K(r')$. Indeed $H_2\left(Y\right)=0$ so $\left[S\right]=0$ and $S$ separates.
Let $M_1$ and $M_2$ be the two components of $Y_K(r') \setminus S$. They are both interval semi-bundles with base $\mathcal{B}$. It follows that if $\Sigma_i$ is the core surface of $M_i$, then $\pi_1(\Sigma_i) \cong \pi_1(M_i) $ for $i=1,2$. On the other hand since $\partial M_i =S$ is connected, we have a $2$ to $1$ connected cover $\partial M_i \to \Sigma_i$. Then $\pi_1(S)$ is an index two subgroup of $\pi_1(\Sigma_i)$, in particular it is normal. Using Van-Kampen theorem we have $$\pi_1(Y_K(r'))=\pi_1(\Sigma_1) \ast_{\pi_1(S)} \pi_1(\Sigma_2)$$
and $\pi_1(S)$ is normal in $\pi_1(Y_K(r'))$ since it is normal in both component of the amalgam. Hence
$$\frac{\pi_1(Y_K(r'))}{\pi_1(S)}=\left(\frac{\pi_1(\Sigma_1)}{\pi_1(S)}\right) \ast \left(\frac{\pi_1(\Sigma_2)}{\pi_1(S)}\right)\cong \Z/2\Z \ast \Z/2\Z$$
and we have a surjection $\pi_1(Y_K(r')) \twoheadrightarrow \Z/2\Z \ast \Z/2\Z$. This induces a surjection in first homology $H_1(Y_K(r')) \twoheadrightarrow \Z/2\Z \oplus \Z/2\Z$, which contradicts the fact that $H_1(Y_K(r'))$ is cyclic. Therefore $\Delta(r,r')=p|q-q'| \leq 1$ which implies $p=1$ and $q'=q+1$. \end{proof}
We can now prove the main theorem.
\begin{thm} Let $K$ be a hyperbolic knot in a homology sphere $Y$. Let $0<p$ and $q<q'$ be integers. If $Y_K(p/q)$ is homeomorphic to $Y_K(p/q')$ as oriented manifolds, then $Y_K(p/q)$ is either \begin{enumerate} \item a reducible manifold in which case $p=1$ and $q'=q+1$, \item a toroidal Seifert fibred manifold in which case $p=1$ and $q'=q+1$, \item a small Seifert manifold with infinite fundamental group in which case either \begin{itemize}
\item[•] $p=1$ and $|q-q'|\leq 8$. \item[•] or $p=5, \ q'=q+1$ and $q \equiv 2 \left[\text{mod}\ 5\right] $. \item[•] or $p=2, \ \text{and} \ q'=q+2$ or $q'=q+4$. \end{itemize}
\item a toroidal irreducible non-Seifert fibred manifold in which case $p=1$ and $|q'-q| \leq 3$. \end{enumerate} \end{thm}
\begin{proof}
The manifold $Y_K(r)$ is either reducible, Seifert fibred or toroidal. If it is reducible then by \cite[Theorem 1.2]{Gordon-Luecke-3} $\Delta(p/q,p/q')= p|q-q'|=1$. If it is toroidal and Seifert fibred then we have (2) which is given by Lemma \ref{my-lemma-3}. The remaining possibilities are then (3), (4) and the case where $\pi_1(Y_K(r))$ is finite. The proof of (3) follows from Lemma~\ref{my-lemma-2}, and (4) follows from Lemma~\ref{my-lemma-2} together with Lemma~\ref{large-distance-toroidal}. We are now left with the last possibility. Assume that $\pi_1(Y_K(r))$ is finite. By \cite[Theorem 1.1]{finite-paper} the distance between two finite slopes is at most $3$, so $\Delta(p/q,p/q')=p|q'-q| \leq 3$. In particular $p\in \{1,2,3\}$, but by Lemma \ref{my-lemma-2}, $p\in \{1,2,5\}$ thus $p=1$ or $p=2$. If $p=2$ then $|q'-q| \geq 2$ by Lemma \ref{my-lemma-2} and $\Delta(p/q,p/q')\geq 4 > 3$ therefore we can only have $p=1$. It follows that $Y_K(r)$ is a homology sphere with finite fundamental group which implies $Y_K(r)=\Sigma\left(2,3,5\right)$ or $S^3$. If $Y_K(r)=\Sigma\left(2,3,5\right)$ or $S^3$ then $Y_K\subset \Sigma\left(2,3,5\right)$ or $S^3$. Let $Z$ denote either $\Sigma\left(2,3,5\right)$ or $S^3$. Then $Y_K=Z \setminus \nb(K')$ where $K'$ is a non-trivial knot in $Z$ for which there is a non-trivial slope which gives $\Sigma\left(2,3,5\right)$. We notice that both $\Sigma\left(2,3,5\right)$ and $S^3$ are L-space homology spheres so by \cite[Lemma 3.3]{Ravelomanana} $\Delta_K''(1)= 2\neq 0$. Therefore by Proposition~\ref{my-proposition-1} there is no orientation preserving homeomorphism between $Y_K(r)$ and $Y_K(r')$.
\end{proof}
\thanks{University of Georgia, Department of Mathematics, 606 Boyd GSRC, Athens GA, USA.\\ Email: [email protected]}
\end{document}
|
arXiv
|
Out-of-pocket healthcare expenditure in Australia: trends, inequalities and the impact on household living standards in a high-income country with a universal health care system
Emily J. Callander1,
Haylee Fox1 and
Daniel Lindsay2
Received: 12 September 2018
Accepted: 1 March 2019
Poor health increases the likelihood of experiencing poverty by reducing a person's ability to work and imparting costs associated with receiving medical treatment. Universal health care is a means of protecting against the impoverishing impact of high healthcare costs. This study aims to document the recent trends in the amount paid by Australian households out-of-pocket for healthcare, identify any inequalities in the distribution of this expenditure, and to describe the impact that healthcare costs have on household living standards in a high-income country with a long established universal health care system. We undertook this analysis using a longitudinal, nationally representative dataset – the Household Income and Labour Dynamics in Australia Survey, using data collected annually from 2006 to 2014. Out of pocket payments covered those paid to health practitioners, for medication and in private health insurance premiums; catastrophic expenditure was defined as spending 10% or more of household income on healthcare.
Average total household expenditure on healthcare items remained relatively stable between 2006 and 2014 after adjusting for inflation, changing from $3133 to $3199. However, after adjusting for age, self-reported health status, and year, those in the lowest income group (decile one) had 15 times the odds (95% CI, 11.7–20.8) of having catastrophic health expenditure compared to those in the highest income group (decile ten). The percentage of people in income deciles 2 and 3 who had catastrophic health expenditure also increased, from 13% to 19% and from 7% to 13% respectively.
Ongoing monitoring of out of pocket healthcare expenditure is an essential part of assessing health system performance, even in countries with universal health care.
Out-of-pocket expenditure
Household living standards
Poor health increases the likelihood of experiencing poverty by reducing a person's ability to work and imparting costs associated with receiving medical treatment. Those who develop a chronic disease have a higher chance of leaving the workforce [29], and as such see a decline in their income as they lose the wages associated with paid employment [25]. This chain of events has been observed internationally [1, 24, 28] – as health, being a key form of human capital, universally affects a person's ability to participate in employment [3]. Countries with a welfare system may provide an income safety net for those who are too ill to work, thus providing a [small] supplementary income stream in the form of transfer payments. Nonetheless, multiple studies have shown that those who develop a chronic disease face an increased risk of falling into income poverty, even in High-Income Countries (HICs) with such welfare systems in place [4, 5, 7].
Poor health, and the negative impact it can have on living standards, is important for a number of reasons. Governments with welfare systems to support those who are too ill to work will see an increase in the number of transfer payments being made; having more people out of the labour force due to ill health reduces the revenue base from which governments can draw an income stream to finance these transfer payments; and from the individual perspective, declining income reduces the amount of disposable income available to finance access to healthcare. This illustrates the cross-portfolio issues associated with the health-living standards nexus, highlighting the far-reaching impacts that poor health can have on both the Government's and the individual's financial capacity.
Poor health not only adversely affects people's financial capacity due to withdrawal from the labour force; poor health can also affect financial capacity by increasing the amount of household expenditure on healthcare related items. Healthcare is more of a 'necessary' good, as opposed to a 'discretionary' good [17], with people often having little choice as to whether they access it or not. As such, increasing expenditure on healthcare has a similar effect to decreasing income: it reduces the amount of disposable income available to families to spend on other goods, such as food, education, transport and entertainment.
Universal health care means that all people have access to the health services they need without being exposed to financial hardship when doing so [8]. Poorer people within the population have the greatest need for health care as they are more likely to suffer from illness and disease [2]. Therefore, contributions should be based on ability to pay and health services should be allocated according to need, ensuring that high out-of-pocket healthcare costs are mitigated, and the associated impoverishing potential of poor health is reduced [8]. Australia has a universal health care system, Medicare, which was introduced in 1984. In response to the spiralling costs facing the Australian state in financing this system, ongoing health care reform has led to scrutiny of the amount being paid out-of-pocket by individuals [26]. Previous research in this area has looked at out-of-pocket expenditure at a single point in time [6, 35], or focused upon expenditure by a single sub-population [21, 32]. However, it has been noted that the basis of the Medicare system – to provide universal health care – is being undermined by increasing out-of-pocket costs [20].
Against this background, this study has three research questions:
What do Australians currently pay for household healthcare expenditure and how has this changed over time?
What proportion of individuals live in households that have 'catastrophic healthcare expenditure', and what is the distribution of catastrophic expenditure by income group?
How many additional people would be in income poverty when household income is adjusted for household healthcare expenditure?
The overall aim of this paper is to document the recent trends in the amount paid by Australians out of pocket for healthcare, identify any inequalities in the distribution of this expenditure, and to describe the impact that household healthcare costs have on living standards in a HIC with a long established UHC system. While internationally, much attention has been given to identifying catastrophic healthcare expenditure [30, 31, 34], to date this has been a relatively overlooked area within Australia. The studies that have been conducted to date have only looked at older Australians with chronic health conditions [21] or specific chronic health conditions [16], and none have looked at the population as a whole.
Australia's healthcare system
Australia's publicly financed national universal health insurance scheme, Medicare, was introduced to promote equity by improving access and affordability of health services. Through Medicare, patients are able to access treatment in public hospitals free of charge, and receive subsidised out of hospital treatment. Patients are provided a rebate benefit for services that are utilised for out of hospital treatment. The rebate amount is based upon a proportion of a schedule of fees covering each type of service. For example, for a consultation with a General Practitioner lasting 20 min or more, the schedule fee in 2017 is $71.70, and the benefit is 100% of the schedule fee, or $71.70; a blood test associated with diabetes management has a schedule fee of $16.80 and the benefit is 75% of the schedule fee, or $12.60 [12]. While public hospitals are managed by the state, most out of hospital services are delivered by private providers. The actual amount charged by providers for services is set by the providers themselves, and these charges are not regulated, meaning that providers are able to set their fees above the schedule fee. Any difference between the price providers charge for a service and the rebate amount is paid by patients 'out-of-pocket'. For illustration, if a provider charged $25.00 for a blood test associated with diabetes management, Medicare would provide a rebate of $12.60 (75% of the schedule fee), leaving the patient to pay $12.40. Medicare has policies designed to help protect patients from high out of pocket costs. Health Care Cards are provided to welfare recipients and low income earners, and entitle holders to pay a lower out of pocket fee for prescription medicines [13]. The 'Medicare Safety Net' and 'Extended Medicare Safety Net' Programs also provide higher rebates if an individual or family group reaches a certain amount of total expenditure on out of pocket fees within a calendar year. Any subsequent services or prescriptions will have a higher proportion subsidized for the rest of that calendar year [15]. Under the "Medicare Safety Net", once the threshold is reached 100% of the schedule fee for all services is rebated; and under the "Extended Medicare Safety Net" 80% of the actual out-of-pocket fees are rebated. For Health Care Card holders the threshold of total expenditure that needs to be reached to receive the "Extended Medicare Safety Net" is lower [14].
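To make the gap arithmetic concrete, the calculation described above can be sketched as follows (an illustrative sketch only; the function name, and the assumption that the rebate is simply the stated percentage of the schedule fee with no rounding or safety-net adjustment, are ours rather than Medicare's):

    def medicare_gap(provider_fee, schedule_fee, rebate_rate=0.75):
        # The rebate is a share of the schedule fee; the patient pays whatever
        # the provider charges above that rebate.
        rebate = rebate_rate * schedule_fee
        return provider_fee - rebate

    # Example: a $25.00 charge for a test with a $16.80 schedule fee rebated at 75%
    # gives a rebate of $12.60 and leaves about $12.40 to be paid out-of-pocket.
    print(round(medicare_gap(25.00, 16.80), 2))  # 12.4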
Dataset to be used for this study – HILDA
Microdata from waves 6 to 14 of the Household Income and Labour Dynamics in Australia (HILDA) Survey was utilised for this study. The HILDA survey is a longitudinal survey of private Australian households conducted annually since 2001, with release 14, containing data from waves 1 (2001) to wave 14 (conducted in 2014), the latest to be released at the time of writing this paper. The data are nationally representative of the Australian population living in private dwellings and aged 15 years and over. There were 6547 records of individuals aged 20 years and over in Wave 6 of the continuing sample HILDA survey, representing 10,381,000 people in the Australian population.
The survey sampling unit for Wave 1 from which the continuing sample is drawn was the household, with all members of the household being part of the sample that would be followed for the life of the survey. Household sampling was conducted in a three-stage approach. Initially, 488 Census Collection Districts (each containing 200 to 250 households) were selected. Within each district, 22 to 34 dwellings were then selected, and finally, up to three households within each dwelling were selected to be part of the sample [27]. The data is weighted to be representative of the Australian population and to account for any bias introduced through respondent attrition. The initial household cross-sectional weights in Wave 1 (upon which the weights in subsequent waves are dependent) were derived from the probability of selecting the household and were calibrated so that the weighted estimates match known benchmarks for the number of adults by the number of children and state by part of the state. The person-level weights were based on the household weights and then calibrated so that person weights match known benchmarks for sex by age, state by part of the state, state by labour force status, marital status and household composition. The longitudinal weights adjusted for attrition and were benchmarked against the characteristics of Wave 1. For a detailed description of HILDA weighting see Watson (2012). All dollar values in this study were adjusted to 2014 Australian dollars based upon Consumer Price Inflation (CPI) (2017) [23].
Household healthcare expenditure
Wave 6 onwards in the HILDA survey asked respondents to estimate the amount the household spent annually on fees paid to:
- Health practitioners;
- Medicines, prescriptions, pharmaceuticals, alternative medicines; and
- Private health insurance.
The reported amounts were recorded separately for each of the three categories. For the purpose of this study, the three groups were summed to create a total health care expenditure amount. All results are reported at the individual level, but for household expenditure.
For this study, total regular household income minus taxes was utilised. For the assessment of the distribution of healthcare expenditure, this measure of household income was equivalised using the OECD-modified equivalence scale [11], which accounts for the number of adults (aged 15 and over) and the number of children (aged 14 and under) living in the household.
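As a rough sketch of this equivalisation step (the column names below are hypothetical, not actual HILDA variable names), the OECD-modified scale assigns 1.0 to the first adult, 0.5 to each additional household member aged 15 and over, and 0.3 to each child aged 14 and under, as described above:

```python
import pandas as pd

def equivalised_income(df: pd.DataFrame) -> pd.Series:
    """Equivalise household disposable income with the OECD-modified scale:
    1.0 for the first adult, 0.5 for each additional adult (aged 15+),
    0.3 for each child (aged 14 and under)."""
    scale = 1.0 + 0.5 * (df["n_adults"] - 1) + 0.3 * df["n_children"]
    return df["hh_income_after_tax"] / scale
```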
Catastrophic healthcare expenditure
Within Australia, there is no accepted threshold for what proportion of a household's income makes expenditure on healthcare 'catastrophic'. We therefore used a threshold of 10%, based upon a previous study conducted within Australia, although other cut-offs have been used internationally [31]. Individuals whose household healthcare expenditure takes up 10% or more of their total regular household income minus taxes are deemed to have 'catastrophic' healthcare expenditure [22].
Impoverishing healthcare expenditure
Impoverishing healthcare expenditure is expenditure that places a household's income below the poverty line. A poverty line of 50% of median equivalised income was used, which is the accepted cut-off for poverty measurement in Australia [9] and differs from the 60% used in some other countries [18]. The total amount of household expenditure on healthcare was subtracted from total regular household income minus taxes, and the result was then equivalised, again using the OECD-modified equivalence scale [11].
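A minimal sketch of how these two indicators could be derived from the HILDA items described above follows; column names are hypothetical, while the 10% threshold and the 50%-of-median poverty line are those used in this study.

```python
import pandas as pd

def flag_expenditure(df: pd.DataFrame, threshold: float = 0.10) -> pd.DataFrame:
    """Flag catastrophic and impoverishing household healthcare expenditure."""
    oop = (df["fees_practitioners"]
           + df["fees_medicines"]
           + df["fees_private_health_insurance"])       # annual out-of-pocket spend
    income = df["hh_income_after_tax"]                   # regular income minus taxes
    scale = 1.0 + 0.5 * (df["n_adults"] - 1) + 0.3 * df["n_children"]

    eq_income = income / scale
    eq_income_net_health = (income - oop) / scale
    poverty_line = 0.5 * eq_income.median()              # 50% of median equivalised income

    return df.assign(
        catastrophic=oop >= threshold * income,           # 10% or more of household income
        impoverished=(eq_income >= poverty_line)
                     & (eq_income_net_health < poverty_line),
    )
```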
The initial descriptive analysis was undertaken to quantify the average out of pocket household expenditure on healthcare for each year between 2006 and 2014.
The proportion of people with catastrophic healthcare expenditure in each income decile was then identified. A generalised estimating equation (GEE) model was constructed to assess the odds of having catastrophic healthcare expenditure for those in different income deciles. The model was adjusted for age, sex, self-assessed health status and year, with those in income decile ten used as the reference group.
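One way such a model could be fitted with statsmodels in Python is sketched below; this is an illustration only, since the paper does not state which software or working correlation structure was used, and the variable names (including the long-format person-year DataFrame `panel_df`) are hypothetical.

```python
import numpy as np
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Binomial GEE with an exchangeable working correlation, clustering repeated
# annual observations within individuals; income decile 10 is the reference.
model = smf.gee(
    "catastrophic ~ C(income_decile, Treatment(reference=10)) + age + C(sex)"
    " + C(self_assessed_health) + C(year)",
    groups="person_id",
    data=panel_df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()
print(np.exp(result.params))  # odds ratios relative to the reference categories
```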
A concentration index was constructed for each year between 2006 and 2014 to identify the cumulative proportion of people with catastrophic healthcare expenditure by cumulative proportion of the population, ranked by equivalised household income. The concentration index (CI), and its associated 95% confidence intervals, were computed as follows:
$$ 2{\sigma}_R^2\left(\frac{y_i}{\mu}\right)=\alpha +\beta {R}_i+{\varepsilon}_i $$
where Ri is the fractional income rank of each individual, \( {\sigma}_R^2 \) is the variance of Ri, yi is the catastrophic healthcare expenditure status of each individual (i = 1, 2, 3…N), μ is its mean, α is the intercept, εi is the error term, and β is the CI [19].
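The 'convenient regression' above can be computed directly; the sketch below (unweighted, with hypothetical inputs) fits it by ordinary least squares, so the fitted slope is the concentration index and its confidence interval comes from the regression output.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def concentration_index(catastrophic, eq_income):
    """Concentration index of catastrophic expenditure over equivalised income,
    via the convenient regression 2*var(R)*(y_i/mean(y)) = a + b*R_i + e_i."""
    d = pd.DataFrame({"y": catastrophic.astype(float), "inc": eq_income})
    d = d.sort_values("inc").reset_index(drop=True)
    d["rank"] = (np.arange(len(d)) + 0.5) / len(d)       # fractional income rank
    lhs = 2.0 * d["rank"].var(ddof=0) * d["y"] / d["y"].mean()
    fit = sm.OLS(lhs, sm.add_constant(d["rank"])).fit()
    ci_low, ci_high = fit.conf_int().loc["rank"]
    return fit.params["rank"], (ci_low, ci_high)
```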
Table 1 shows the average amount of household expenditure on healthcare practitioners; medicines, pharmaceuticals, and alternative medicines; and private health insurance. Average total household expenditure on healthcare items has only slightly increased after adjusting for inflation between 2006 and 2014 from $3133 to $3199 (in 2014 dollars). This appears to be mostly driven by increases in private health insurance expenditure, which was on average $1242 in 2006 and increased steadily to $1557 in 2014. Average expenditure on healthcare practitioners decreased slightly between 2006 and 2014 from $1188 in 2006 to $1099 in 2014, and expenditure on medicines, pharmaceuticals, and alternative medicines remained somewhat constant.
Table 1. Mean total household out-of-pocket expenditure on healthcare costs (with standard errors of the mean), 2006–2014: total healthcare expenditure; expenditure on healthcare practitioners; expenditure on medicines, pharmaceuticals and alternative medicines; and expenditure on private health insurance.
Table 2 shows that the proportion of people with catastrophic healthcare expenditure decreases with income decile, with those in the lowest income decile having the highest percentage of people with catastrophic healthcare expenditure. The concentration index for the distribution of catastrophic expenditure was −0.39 (95% CI: −0.43, −0.34) in 2006 and −0.46 (95% CI: −0.50, −0.42) in 2014, showing an increasing concentration of catastrophic healthcare expenditure among those with lower incomes over time.
Table 2. Proportion of people with catastrophic healthcare expenditure by income decile, 2006–2014, together with the concentration index by year, including −0.39 (−0.43, −0.34) in 2006, −0.53 (−0.58, −0.48), and −0.46 (−0.50, −0.42) in 2014.
Relative to those in the highest income decile, there was an increasing likelihood of having catastrophic healthcare expenditure with decreasing income decile. After adjusting for age, self-reported health status, and year, those in income decile one had 15.63 times the odds (95% CI: 10.88–22.43) of having catastrophic health expenditure compared to those in income decile ten (Table 3).
Table 3. Generalised estimating equation model of the likelihood of having catastrophic healthcare expenditure: odds ratios with 95% confidence limits and p-values by income decile (deciles 1 to 10, with decile 10 as the reference group) and self-assessed health status (very poor, poor, fair, good, excellent).
Finally, we estimated the number of people who would have been classified as being in income poverty, had income been adjusted for the amount of healthcare expenditure. In 2006, an additional 141,000 people were in income poverty. In 2014, an additional 285,000 people were in income poverty (Table 4).
Table 4. Additional number of people who would be in income poverty when household income is adjusted for household healthcare expenditure, by year: actual number of people in income poverty; number of people in income poverty adjusted for healthcare costs; and the additional number of people in poverty.
Average household out-of-pocket expenditure on healthcare – covering expenditure on health practitioners, medication and private health insurance premiums – has remained relatively constant after adjusting for inflation between 2006 and 2014 for the general adult population in Australia. However, those with lower incomes were more likely to have catastrophic healthcare expenditure (spending 10% or more of household income on healthcare) over this time period, and between 2006 and 2014 there was increasing inequality in the distribution of catastrophic healthcare expenditure towards those with lower income. The impact of household healthcare expenditure on household living standards was such that, after adjusting household income for healthcare expenditure, in excess of 200,000 additional people would be classified as being in income poverty within Australia in 2014.
No previous study has sought to assess the distribution of the impact of out-of-pocket expenditure, nor sought to assess the impoverishing consequences of healthcare expenditure in Australia. Co-payments and the impact they have had on accessing primary health care services in Australia was discussed by Laba et al. [20], and a previous study has shown that 1 in 4 Australians with a chronic health condition skip care due to the cost [6]. This highlights the importance of assessing the level of out-of-pocket expenditure on healthcare and identifying population groups who may be disproportionately affected.
The use of self-reported healthcare expenditure is a key weakness of this study, which is also common to all previous studies that used individual-level data to assess out-of-pocket health care expenditure in Australia. It could be questioned whether individuals are able to accurately recall the amount they have spent on healthcare, which may have influenced the accuracy of the results. However, the amount of expenditure reported in this study was similar to the amount reported in a previous study on healthcare expenditure [35]. Future research may be able to make better use of health administrative data to overcome these issues, or to use shorter recall periods [10].
Financial risk protection is a core objective of universal health coverage [33]. Although Australia has a universal health care system, this study has demonstrated that Australia's health system may not be protecting its most vulnerable citizens against catastrophic health expenditure and income poverty, which disproportionally burdens the most disadvantaged people within the population. If a health care system is to meet the objectives of universal health coverage, total contributions should be based on ability to pay, and health care services should be allocated according to need, which means poorer people should receive greater health care benefits due to greater health care needs [33]. Through the Extended Medicare Safety Net Scheme the Australian government seeks to do this. However this study indicates that Australia's universal health system appears to not safeguard the poorest people in society, which are the people who need it the most, against the financial hardship associated with accessing health care. Universal health systems should develop in a way that does not impose harm to other social sectors in people's lives by imposing catastrophic health expenditures upon households.
Out-of-pocket payments are considered to be the most regressive form of financing a health system [33]. These results highlight the financial impact experienced by households as a consequence of this regressive approach to providing health care to the population. The findings clearly demonstrate the importance of vigilance to ensure ongoing progress towards universal health coverage, rather than assuming that financial risk protection is an inevitable outcome of having a universal health system.
HIC:
High Income Country: defined by the World Bank as a country with a gross national income per capita of US$12,056 or more in 2017
HILDA:
Household Income and Labour Dynamics in Australia: a longitudinal survey of private Australian households
OECD:
Organisation for Economic Co-operation and Development: an intergovernmental economic organisation with 36 member countries, founded in 1961 to stimulate economic progress and world trade
UHC:
Universal Health Care: a health care system that provides health care and financial protection to all citizens of a particular country
EC received salary support from a National Health and Medical Research Council (NHMRC) Career Development Fellowship (APP1159536).
The HILDA data utilised in this study is available upon request from the University of Melbourne.
EC conceived the study design and led the overall study. HF drafted the manuscript. DL undertook the analysis. All authors contributed to the interpretation of the results and editing of the final manuscript. All authors read and approved the final manuscript.
Associate Professor Callander receives part of her salary from a National Health and Medical Research Council (NHMRC) Career Development Fellowship. All authors declare that they have no conflicts of interest that are directly or indirectly related to the research.
School of Medicine, Griffith University – Gold Coast campus, G05 Room 2.24, Southport, Queensland, 4125, Australia
College of Public Health, Medical and Veterinary Science, James Cook University, Townsville, QLD, 4810, Australia
Alavinia SM, Burdorf A. Unemployment and retirement and ill-health: a cross-sectional analysis across European countries. Int Arch Occup Environ Health. 2008;82(1):39–45.
Australian Institute of Health and Welfare (AIHW). Australia's health 2014. In: Understanding health and illness. Canberra, ACT: Australian Government; 2016.
Becker GS. Health as human capital: synthesis and extensions. Oxf Econ Pap. 2007;59:379–410.
Callander E, Schofield D. Arthritis and the risk of falling into poverty: a survival analysis using Australian data. Arthritis Rheumatol. 2015a;68(1):255–62.
Callander E, Schofield D. Type 2 diabetes mellitus and the risk of falling into poverty: an observational study. Diabetes Metab Res Rev. 2016;32(6):581–8.
Callander EJ, Corscadden L, Levesque J-F. Out-of-pocket healthcare expenditure and chronic disease – do Australians forgo care because of the cost? Aust J Prim Health. 2017;23(1):15–22.
Callander EJ, Schofield DJ. Psychological distress and the increased risk of falling into poverty: a longitudinal study of Australian adults. Soc Psychiatry Psychiatr Epidemiol. 2015b;50(10):1547–56.
Chan M, Brundtland G. Universal health coverage: an affordable goal for all. Geneva: World Health Organisation; 2016.
Community Affairs Reference Committee. A hand up not a hand out: renewing the fight against poverty. Senate Inquiry into Poverty 2004. Canberra: The Senate; 2004.
Dalziel K, Li J, Scott A, Clarke P. Accuracy of patient recall for self-reported doctor visits: is shorter recall better? Health Econ. 2018;27(11):1684–98.
De Vos K, Zaidi MA. Equivalence scale sensitivity of poverty statistics for the member states of the European community. Rev Income Wealth. 1997;43(3):319–33.
Department of Health. Medicare Benefits Schedule Book: operating from 01 September 2017. Canberra: Australian Government; 2017.
Department of Human Services. Medicare safety net 2019 table of thresholds. 2019a. Retrieved 18 January 2019, from https://www.humanservices.gov.au/individuals/services/medicare/medicare-safety-net/thresholds/2019-table-thresholds.
Department of Human Services. What the benefits are of a health care card. 2019b. Retrieved 18 January 2019, from https://www.humanservices.gov.au/individuals/enablers/what-benefits-are-health-care-card/39581.
Duckett S, Willcox S. The Australian health care system. Victoria, Australia: Oxford University Press; 2015.
Essue B, Kelly P, Roberts M, Leeder S, Jan S. We can't afford my chronic illness! The out-of-pocket burden associated with managing chronic obstructive pulmonary disease in western Sydney. J Health Serv Res Policy. 2011;16:226–31.
Farag M, NandaKumar A, Wallack S, Hodgkin D, Gaumer G, Erbil C. The income elasticity of health care spending in developing and developed countries. Int J Health Care Finance Econ. 2012;12(2):145–62.
Gordon D. Poverty and social exclusion in Britain. Bristol: The Policy Press; 2006.
Kakwani N, Wagstaff A, Van Doorslaer E. Socioeconomic inequalities in health: measurement, computation, and statistical inference. J Econ. 1997;77(1):87–103.
Laba T-L, Usherwood T, Leeder S, Yusuf F, Gillespie J, Perkovic V, Wilson A, Jan S, Essue B. Co-payments for health care: what is their real cost? Aust Health Rev. 2015;39(1):33–6.
McRae I, Yen L, Jeon Y-H, Herath PM, Essue B. Multimorbidity is associated with higher out-of-pocket spending: a study of older Australians with multiple chronic conditions. Aust J Prim Health. 2013;19(2):144–9.
McRae I, Yen L, Jeon Y, Herath M, Essue B. The health of senior Australians and the out-of-pocket healthcare costs they face. Canberra: National Seniors Australia; 2012.
Reserve Bank of Australia. Measures of consumer price inflation. 2017. Retrieved 1 January 2017.
Schofield D, Shrestha R, Passey M, Earnest A, Fletcher S. Chronic disease and labour force participation among older Australians. Med J Aust. 2008;189:447–50.
Schofield D, Shrestha R, Percival R, Passey M, Kelly S, Callander E. Economic impacts of illness in older workers: quantifying the impact of illness on income, tax revenue and government spending. BMC Public Health. 2011;11:418.
Senate Standing Committees on Community Affairs. Out-of-pocket costs in Australian healthcare. Canberra: Commonwealth of Australia; 2014.
Summerfield M, Freidin S, Hahn M, Ittak P, Li N, Macalalad N, et al. HILDA User Manual – Release 12. Melbourne: The University of Melbourne; 2013.
van den Berg T, Schuring M, Avendano M, Mackenbach J, Burdorf A. The impact of ill health on exit from paid employment in Europe among older workers. Occup Environ Med. 2010;67(12):845–52.
van Rijn RM, Robroek SJ, Brouwer S, Burdorf A. Influence of poor health on exit from paid employment: a systematic review. Occup Environ Med. 2014;71(4):295–301.
Wagstaff A, van Doorslaer E. Catastrophe and impoverishment in paying for health care: with applications to Vietnam 1993–1998. Health Econ. 2003;12(11):921–33.
Wagstaff A, Flores G, Hsu J, Smitz M-F, Chepynoga K, Buisman LR, van Wilgenburg K, Eozenou P. Progress on catastrophic health spending in 133 countries: a retrospective observational study. Lancet Glob Health. 2018;6(2):e169–79.
Wong CY, Greene J, Dolja-Gore X, Gool K. The rise and fall in out-of-pocket costs in Australia: an analysis of the strengthening Medicare reforms. Health Econ. 2017;26(8):962–79.
World Health Organisation. World health report 2010: health systems financing – the path to universal coverage. Geneva: World Health Organisation; 2010.
Xu K, Evans DB, Kawabata K, Zeramdini R, Klavus J, Murray CJ. Household catastrophic health expenditure: a multicountry analysis. Lancet. 2003;362(9378):111–7.
Yusuf F, Leeder S. Can't escape it: the out-of-pocket cost of health care in Australia. MJA. 2013;199(7):475–8.
|
CommonCrawl
|
\begin{document}
\setcounter{page}{1} \thispagestyle{empty}
\maketitle
\begin{abstract} In proving the local $T_b$ Theorem for two weights in one dimension \cite{SaShUT} Sawyer, Shen and Uriarte-Tuero used a basic theorem of Hyt\"{o}nen \cite{Hy} to deal with estimates for measures living in adjacent intervals. Hyt\"{o}nen's theorem states that the off-testing condition for the Hilbert transform is controlled by the Muckenhoupt's $A_2$ and $A^*_2$ conditions. So in attempting to extend the two weight $T_b$ theorem to higher dimensions, it is natural to ask if a higher dimensional analogue of Hyt\"{o}nen's theorem holds that permits analogous control of terms involving measures that live on adjacent cubes. In this paper we show that it is not the case even in the presence of the energy conditions used in one dimension \cite{SaShUT}. Thus, in order to obtain a local $T_b$ theorem in higher dimensions, it will be necessary to find some substantially new arguments to control the notoriously difficult nearby form. More precisely, we show that Hyt\"{o}nen's off-testing condition for the two weight fractional integral and the Riesz transform inequalities is not controlled by Muckenhoupt's $A_2^\alpha$ and $A_2^{\alpha,*}$ conditions and energy conditions. \end{abstract}
\section{Introduction}
Characterizing two-weight norm inequalities for singular integrals is an important, long standing open problem, only recently solved in one dimension by Lacey, Sawyer, Shen and Uriarte-Tuero in a two-part paper \cite{LaSaShUT}-\cite{La}. Hyt\"{o}nen \cite{Hy} later removed a technical hypothesis, and for his proof an important piece was to bound the bilinear form when two functions are supported on disjoint half-lines in terms only of (his variant of) the Muckenhoupt $A_2$ constants. As mentioned in the abstract, Sawyer, Shen and Uriarte-Tuero \cite{SaShUT} used Hyt\"{o}nen's theorem to estimate the difficult nearby form in the one-dimensional local $T_b$ theorem and it seems natural to ask whether a higher dimensional analogue of Hyt\"{o}nen's theorem is true in order to estimate the nearby form in the higher dimensional local $T_b$ Theorem. Our paper answers this question negatively, even if we assume the energy conditions $\mathcal{E}^\alpha_2, \mathcal{E}^{\alpha,*}_2$, as in the case of the one dimensional two-weighted local $T_b$ Theorem.
The key idea is the construction of two measures on the plane placed close to each other (Figure 1) so that the off-testing condition fails but the $A_2$ and energy conditions hold using some one-dimensional results from \cite{LaSaUr}. Following closely the aforementioned work, we first construct two measures in $\mathbb{R}$ with the novelty being the use of a `wrong' homogeneity of the one-dimensional Riesz, Poisson and fractional integrals that accommodates all $0<\alpha<2$.
Let $0\leq \alpha<n$. For any locally finite Borel measure $\sigma$, we define the fractional integral on $\mathbb{R}^n$ by $$
I^{\alpha} (f\sigma)(x)=\int_{\mathbb{R}^n} \frac{f(y)}{|x-y|^{n-\alpha}}d\sigma(y),\ x\notin\supp(f\sigma) \ $$ for any $f\in L^2(\sigma)$, and the Riesz transforms by $$
R^\alpha_m (f\sigma)(x)=\int_{{\mathbb R}^n}\frac{(t_m-x_m)f(t)}{|x-t|^{n+1-\alpha}}d\sigma(t),\quad x\notin\supp(f\sigma) , \ \ 1\leq m\leq n $$ where $x=(x_1,\dots ,x_n)$, $t=(t_1,\dots, t_n)$. If $\omega$ is another locally finite Borel measure, we say that the pair of weights $(\sigma,\omega)$ satisfies the fractional Muckenhoupt $\mathcal{A}^{\alpha}_2$ and $\mathcal{A}^{\alpha,*}_2$ conditions in ${\mathbb R}^n$ if \begin{equation*}\label{3}
\mathcal{A}^{\alpha}_2\equiv \sup_{Q\in\mathcal{I}}\mathcal{P}^\alpha(Q,\textbf{1}_{Q^c}\sigma)\frac{\omega(Q)}{|Q|^{1-\frac{\alpha}{n}}}<\infty \end{equation*} and \begin{equation*}\label{1}
\mathcal{A}^{\alpha,*}_2\equiv \sup_{Q\in\mathcal{I}}\mathcal{P}^\alpha(Q,\textbf{1}_{Q^c}\omega)\frac{\sigma(Q)}{|Q|^{1-\frac{\alpha}{n}}}<\infty \end{equation*} where $\mathcal{I}$ denotes the collection of all cubes $Q$ in ${\mathbb R}^n$ whose sides are parallel to the axes and $$
\mathcal{P}^\alpha(Q,\mu)=\int_{{\mathbb R}^n}\bigg(\frac{|Q|^\frac{1}{n}}{(|Q|^\frac{1}{n}+|x-x_Q|)^2}\bigg)^{n-\alpha}d\mu(x), $$ with $x_Q$ being the center of the cube, is the reproducing Poisson integral. We also say that the pair $(\sigma,\omega)$ satisfies the energy (resp. dual energy) condition if \begin{equation*} \left( \mathcal{E}_{2}^{\alpha }\right) ^{2}\equiv \sup_{Q=\dot{\cup}Q_{r}} \frac{1}{\sigma(Q)}\sum_{r=1}^{\infty }\left( \frac{\mathrm{P} ^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right) }{\left\vert Q_{r}\right\vert ^{\frac{1}{n}}}\right) ^{2}\left\Vert x-m^\omega_{Q_{r}}\right\Vert _{L^{2}\left( \mathbf{1}_{Q_{r}}\omega \right) }^{2}<\infty \end{equation*} \begin{equation*} \left( \mathcal{E}_{2}^{\alpha ,\ast }\right) ^{2}\equiv \sup_{Q=\dot{\cup} Q_{r}}\frac{1}{\omega(Q)}\sum_{r=1}^{\infty }\left( \frac{\mathrm{P}^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\omega \right) }{ \left\vert Q_{r}\right\vert ^{\frac{1}{n}}}\right) ^{2}\left\Vert x-m^\sigma_{Q_{r}}\right\Vert _{L^{2}\left( \mathbf{1}_{Q_{r}}\sigma \right) }^{2}<\infty \end{equation*} where the supremum is taken over arbitrary decompositions of a cube $Q$ using a pairwise disjoint union of subcubes $Q_{r}$, where $$
\mathrm{P}^\alpha(Q,\mu)=\int_{{\mathbb R}^n}\frac{|Q|^\frac{1}{n}}{(|Q|^\frac{1}{n}+|x-x_Q|)^{n+1-\alpha}}d\mu(x) $$ is the standard Poisson integral and $$
m_I^\mu\equiv\frac{1}{\mu(I)}\int x d\mu(x)=\left\langle \frac{1}{|I|_\mu}\int x_1d\mu(x),...,\frac{1}{|I|_\mu}\int x_nd\mu(x)\right\rangle. $$
In the one-dimensional setting, Hyt\"{o}nen \cite{Hy} has characterized the restricted bilinear inequality, \begin{equation}\label{2}
\bigg|\int_{{\mathbb R}\backslash I}\bigg(\int_I\frac{f(y)}{|x-y|}d\sigma(y)\bigg)g(x)d\omega(x)\bigg|\lesssim\mathcal{D}\big|\big|f\big|\big|_{L^2(\sigma)}\big|\big|g\big|\big|_{L^2(\omega)} \end{equation} for all intervals $I$, in terms of the Muckenhoupt conditions, namely $$ \mathcal{D}\approx\sqrt{\mathcal{A}^0_2}+\sqrt{\mathcal{A}_2^{0,*}} $$ where $\mathcal{D}$ is the best constant in (\ref{2}). In \cite{Hy} this inequality was proved for complementary half-lines, where it was noted that the passage to an interval and its complement is then routine. In \cite{SaShUT}, Hyt\"{o}nen's characterization was extended to fractional integrals on the line with the same proof. Namely, $$
\Bigg|\int_{{\mathbb R}\backslash I} \bigg(\int_{I} \frac{f(y)}{|x-y|^{1-\alpha}}d\sigma(y)\bigg)g(x)d\omega(x)\Bigg|\leq \mathcal{D}^\alpha \big|\big|f\big|\big|_{L^2(\sigma)}\big|\big|g\big|\big|_{L^2(\omega)} $$ and that $\sqrt{\mathcal{A}_2^\alpha}+\sqrt{\mathcal{A}_2^{\alpha,*}}\approx\mathcal{D}^\alpha$, where $\mathcal{D}^\alpha$ is the best constant in the inequality above.
Define the off-testing constants $\mathcal{T}_{\textit{off},\alpha}$ and $\mathcal{R}_{j,\textit{off},\alpha}$ in $\mathbb{R}^2$ by \begin{equation*}\label{4}
\mathcal{T}_{\textit{off},\alpha}^2=\sup_Q \frac{1}{\omega (Q)}\int_{{\mathbb R}^2\backslash Q}\bigg(\int_Q\frac{1}{|x-y|^{2-\alpha}}d\omega (y)\bigg)^2d\sigma (x) \end{equation*} \begin{equation*}\label{5}
\mathcal{R}_{m,\textit{off},\alpha}^2=\sup_Q \frac{1}{\omega (Q)}\int_{{\mathbb R}^2\backslash Q}\bigg(\int_Q\frac{t_m-x_m}{|x-t|^{3-\alpha}}d\omega (t)\bigg)^2d\sigma (x), \quad 1\leq m\leq 2 \end{equation*} for all cubes $Q \subset {\mathbb R}^2$ whose sides are parallel to the axes. \section{Main result} We show that in two dimensions, we can find a pair of measures such that $\mathcal{A}^\alpha_2$, $\mathcal{E}^\alpha_2$ and their dual conditions hold, but the off-testing condition fails. Thus, we prove that we cannot extend Hytonen's theorem in \cite{Hy} in higher dimensions. Indeed, Theorem \ref{Riesz} provides a counterexample to the analogue of Hyt\"{o}nen's theorem in $\mathbb{R}^2$ as the Riesz transforms for $\alpha=0$ are the extensions of the Hilbert transform in higher dimensions.
\begin{theorem} \label{fracint} For $0\leq \alpha<2$, there exists a pair of locally finite Borel measures $\sigma, \omega$ in ${\mathbb R}^2$ such that the fractional Muckenhoupt $\mathcal{A}^{\alpha}_2, \mathcal{A}^{\alpha,*}_2$ and the energy $\mathcal{E}_2^\alpha$, $\mathcal{E}_2^{\alpha,*}$ constants are finite but the off-testing constant $\mathcal{T}_{\textit{off},\alpha}$ is not. \end{theorem} \begin{theorem}\label{Riesz} For $0\leq \alpha<2$, there exists a pair of locally finite Borel measures $\sigma, \omega$ in ${\mathbb R}^2$ such that the fractional Muckenhoupt $\mathcal{A}^{\alpha}_2, \mathcal{A}^{\alpha,*}_2$ and the energy $\mathcal{E}_2^\alpha$, $\mathcal{E}_2^{\alpha,*}$ constants are finite but the off-testing constants $\mathcal{R}_{m,\textit{off},\alpha}$ are not. \end{theorem}
\section{Proofs Of The Theorems}
We begin with the proof of Theorem \ref{fracint}. The proof of Theorem \ref{Riesz} will be very similar; we will only have to deal with the cancellation occurring in the kernel, for which Lemma \ref{lemma} will be useful.
\begin{proof}[Proof of Theorem \ref{fracint}] First we build two measures in ${\mathbb R}$, generalizing the work done in \cite{LaSaUr}, and then they will be used for our two dimensional construction. $$ \underline{\textsc{The One-Dimensional Construction}} $$
Given $0\leq\alpha<2$, choose $\frac{1}{3}\leq b<1$ such that $\frac{1}{9}\leq \left(\frac{1-b}{2}\right)^{2-\alpha}\leq\frac{1}{3}$. Let $s_0^{-1}=\left(\frac{1-b}{2}\right)^{2-\alpha}$. Recall the middle-$b$ Cantor set $\mathrm{E}_b$ and the Cantor measure $\ddot{\omega}$ on the closed interval $I^0_1=[0,1]$. At the $k$th generation in the construction, there is a collection $\{I_j^k\}_{j=1}^{2^k}$ of $2^k$ pairwise disjoint closed intervals of length $|I_j^k|=\left(\frac{1-b}{2}\right)^k$. The Cantor set is defined by $\mathrm{E}_b=\bigcap_{k=1}^{\infty}\bigcup_{j=1}^{2^k}I_j^k$ and the Cantor measure $\ddot{\omega}$ is the unique probability measure supported in $\mathrm{E}_b$ with the property that it is equidistributed among the intervals $\{I_j^k\}_{j=1}^{2^k}$ at each scale $k$, i.e. $$ \ddot{\omega}(I^k_j)=2^{-k},\ \ \ \ k \geq 0, 1 \leq j \leq 2^k. $$ We denote the removed open middle $b$th of $I_j^k$ by $G^k_j$ and by $\ddot{z}^k_j$ its center. Following closely \cite{LaSaUr}, we define $$ \ddot{\sigma}=\sum_{k,j}s^k_j\delta_{\ddot{z}^k_j} $$
where the sequence of positive numbers $s^k_j$ is chosen to satisfy $\displaystyle \frac{s^k_j\ddot{\omega}(I^k_j)}{|I^k_j|^{4-2\alpha}}=1$, i.e. $$ s^k_j=\left(\frac{2}{s_0^2}\right)^{k}, \ \ k \geq 0,\ 1 \leq j \leq 2^k. $$ \textsc{The Testing Constant is Unbounded}. Consider the following operator $$
\ddot{T}f(x)=\int_{\mathbb R}\frac{f(y)}{|x-y|^{2-\alpha}}dy $$ Note that \begin{eqnarray*}\label{testing value} \ddot{T}\ddot{\omega}(\ddot{z}^k_1) \!=\!
\int_{I_1^0}\frac{d\ddot{\omega}(y)}{|\ddot{z}_1^k-y|^{2-\alpha}} \!\geq\!
\int_{I_1^k}\frac{d\ddot{\omega}(y)}{|\ddot{z}_1^k-y|^{2-\alpha}} \!\geq\! \frac{\ddot{\omega} (I_1^k)}{\left(\frac{1}{2}\left(\frac{1-b}{2}\right)^{k}\right)^{2-\alpha}} \!\approx\! \left(\frac{s_0}{2}\right)^k \end{eqnarray*}
since $|\ddot{z}_1^k-y|\leq|\ddot{z}_1^k|$ for $y\in I_1^k$ and $\ddot{z}_1^k=\frac{1}{2}(\frac{1-b}{2})^{k}$. Similar inequalities hold for the rest of $\ddot{z}^k_j$. This implies that the following testing condition fails: \begin{eqnarray} \int_{I_1^0}\left(\ddot{T}(\mathbf{1}_{I_1^0}\ddot{\omega})(x)\right)^2d\ddot{\sigma}(y) \gtrsim \sum_{k=1}^ \infty\sum_{j=1}^{2^k}s^k_j\cdot \left(\frac{s_0}{2}\right)^{2k} = \label{testinfinity} \sum_{k=1}^\infty\sum_{j=1}^{2^k}\frac{1}{2^k}=\infty \end{eqnarray} \textsc{The $\ddot{\mathcal{A}}_2$ Condition}. Let us now define $$
\ddot{\mathcal{P}}(I,\mu)=\int_\mathbb{R}\left(\frac{|I|}{\left(|I|+|x-x_I|\right)^2}\right)^{2-\alpha}\!\!\!\!d\mu(x) $$ and the following variant of the $A_2^\alpha$ condition: $$ \ddot{\mathcal{A}}_2^\alpha(\ddot{\sigma},\ddot{\omega})=\sup_I \ddot{\mathcal{P}}(I,\ddot{\sigma}) \cdot \ddot{\mathcal{P}}(I,\ddot{\omega}) $$ where the supremum is taken over all intervals in $\mathbb{R}$. We verify that $\ddot{\mathcal{A}}_2^\alpha$ is finite for the pair $(\ddot{\sigma},\ddot{\omega})$. The starting point is the estimate
$$ \ddot{\sigma}(I^\ell_r)=\!\!\!\! \sum_{(k,j):\ddot{z}^k_j \in I^\ell_r} \!\!\!\!\! s^k_j = \sum_{k=\ell}^\infty2^{k-\ell}s_j^k = 2^{-\ell}\sum_{k=\ell}^\infty \left(\frac{4}{s_0^2}\right)^{\!k} \!\approx\! \left(\frac{2}{s_0^2}\right)^{\ell}=s^\ell_r $$ and from this it immediately follows that \begin{equation}\label{varclass_a2}
\frac{\ddot{\sigma}(I^\ell_j)\ddot{\omega}(I^\ell_j)}{|I_j^\ell|^{4-2\alpha}}\approx
\frac{s_j^\ell \ddot{\omega}(I^\ell_j)}{|I_j^\ell|^{4-2\alpha}}=1,\ \text{for }\ell \geq 0,\ 1 \leq j \leq 2^\ell. \end{equation} Now from the definition of $\ddot{\sigma}$ we get, \begin{eqnarray} \label{poisson sigma} \ddot{\mathcal{P}}(I^\ell_r,\ddot{\sigma}) &\leq&
\frac{\ddot{\sigma}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\int_{I_1^0\backslash I^\ell_r}\left(\frac{|I^\ell_r|}{\left(|I^\ell_r|+|x-x_{I^\ell_r}|\right)^2}\right)^{2-\alpha}\!\!\!\!d\ddot{\sigma}(x)\\ &\leq&
\frac{\ddot{\sigma}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\sum_{m=0}^{\ell}\sum_{k=m}^\infty \frac{2^{k-m}s^k_j\ |I^\ell_r|^{2-\alpha}}{\left(|I^\ell_r|+b\left(\frac{1-b}{2}\right)^m\right)^{4-2\alpha}}\notag\\ &\lesssim &
\frac{\ddot{\sigma}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\sum_{m=0}^{\ell}\frac{2^{-m}|I^\ell_r|^{2-\alpha}\left(\frac{4}{s_0^2}\right)^m}{\left(b\left(\frac{1-b}{2}\right)^{m-\ell}|I^\ell_r|\right)^{4-2\alpha}}\notag\\ &=&
\frac{\ddot{\sigma}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\frac{b^{2\alpha-4}}{|I^\ell_r|^{2-\alpha}} \left(\frac{1}{s_0^2}\right)^{\!\!\ell} \sum_{m=0}^{\ell}2^m\notag\\ &\lesssim&
\frac{\ddot{\sigma}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\frac{s_r^\ell}{|I^\ell_r|^{2-\alpha}} \approx
\frac{\ddot{\sigma}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}\notag \end{eqnarray} and using the uniformity of $\ddot{\omega}$, \begin{eqnarray} \label{poisson omega} \ddot{\mathcal{P}}(I^\ell_r,\ddot{\omega}) &\leq&
\frac{\ddot{\omega}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}
+ \int_{I_1^0\backslash I^\ell_r}\left(\frac{|I^\ell_r|}{\left(|I^\ell_r|+|x-x_{I^\ell_r}|\right)^2}\right)^{2-\alpha}\!\!\!\!d\ddot{\omega}(x)\\ &\leq&
\frac{\ddot{\omega}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\sum_{k=1}^\ell \frac{|I^\ell_r|^{2-\alpha}\ \ddot{\omega}(I^k_{j_k})}{\left(|I^\ell_r|+b\left(\frac{1-b}{2}\right)^{k-1}\right)^{4-2\alpha}}\notag\\ &\leq&
\frac{\ddot{\omega}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\sum_{k=1}^\ell \frac{|I^\ell_r|^{2-\alpha}\ \ddot{\omega}(I^k_{j_k})}{\left(b\left(\frac{1-b}{2}\right)^{k-1-\ell}|I^\ell_r|\right)^{4-2\alpha}}\notag \end{eqnarray} \begin{eqnarray} &\lesssim&
\frac{\ddot{\omega}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}+
\frac{2^{-\ell}}{|I^\ell_r|^{2-\alpha}} =
2\frac{\ddot{\omega}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}\notag , \end{eqnarray} where $I^k_{j_k}\subset I^{k-1}_t$, $I^\ell_r\subset I^{k-1}_t $ and $I^k_{j_k}\cap I^\ell_r=\emptyset$, and where all the implied constants in the above calculations depend only on $\alpha$. From (\ref{poisson sigma}), (\ref{poisson omega}) and \eqref{varclass_a2}, we see that $$ \ddot{\mathcal{P}}(I_r^\ell,\ddot{\sigma}) \ddot{\mathcal{P}}(I_r^\ell,\ddot{\omega})\lesssim 1. $$ Let us now consider an interval $I\subset I_1^0$ and let $A>1$ be fixed. Then, let $k$ be the smallest integer such that $\ddot{z}_j^k\in AI$; if there is no such $k$, then $AI\subsetneqq G_{j}^\ell$, for some $\ell$. We have the following cases:\\
\textbf{Case 1.} Assume that $I\subset AI \subsetneqq G_j^k \subset I_j^k$. If $|x_I-\ddot{z}^k_j|\leq \dist(x_I, \partial G^k_j)$ then, \begin{eqnarray} \label{first case} &&
\ddot{\mathcal{P}}(I,\ddot{\sigma})\ddot{\mathcal{P}}(I,\ddot{\omega})= |I|^{4-2\alpha}\int_{I^0_1}\frac{d\ddot{\sigma}(x)}{(|I|+|x-x_I|)^{4-2\alpha}}\int_{I^0_1}\frac{d\ddot{\omega}(x)}{(|I|+|x-x_I|)^{4-2\alpha}}\\ &\lesssim&
|I|^{4-2\alpha}\left(\frac{s^k_j}{|I|^{4-2\alpha}}+\frac{1}{|I^k_j|^{2-\alpha}}\int_{I^0_1\backslash G^k_j}\frac{|I^k_j|^{2-\alpha}d\ddot{\sigma}(x)}{(|I^k_j|+|x-x_{I^k_j}|)^{4-2\alpha}}\right)\frac{\ddot{\mathcal{P}}(I^k_j,\ddot{\omega})}{|I^k_j|^{2-\alpha}}\notag \\ &\lesssim&
\frac{|I|^{4-2\alpha}}{|I^k_j|^{2-\alpha}}\left(\frac{s^k_j}{|I|^{4-2\alpha}}+\frac{\ddot{\sigma}(I_j^k)}{|I^k_j|^{4-2\alpha}}\right)\frac{\ddot{\omega}(I^k_j)}{|I^k_j|^{2-\alpha}} \lesssim
\frac{\ddot{\sigma}(I_j^k)\ddot{\omega}(I^k_j)}{|I^k_j|^{4-2\alpha}}\approx 1\notag \end{eqnarray}
where in the first inequality we used the fact that $|x-x_I|\approx|x-\ddot{z}_j^k|\gtrsim |I_j^k|$ when $x\notin G^k_j$, since $x_I$ is ``close'' to the center of $G^k_j$, and for the second inequality we used \eqref{poisson sigma} and \eqref{poisson omega}.
If $|x_I-\ddot{z}^k_j|> \dist(x_I, \partial G^k_j)$, we can assume $
b\left(\frac{1-b}{2}\right)^{m-1}\leq |I|\leq b\left(\frac{1-b}{2}\right)^m $
for some $m>k$, since for $m=k$ we have $|I|\approx |I^k_j|$, $|x-x_I|\gtrsim|x-x_{I^k_j}|$ for $x \notin G^k_j$ and we can repeat the proof of \eqref{first case}. Now let $I^m_t$ be the $m$-th generation interval closest to $I$ that touches the boundary of $G^k_j$. We have, using $|x_{I^m_t}-\ddot{z}^\ell_j|\lesssim|x_I-\ddot{z}^\ell_j|$, for all $\ell \geq 1, 1\leq j\leq 2^\ell$, $ \ddot{\mathcal{P}}(I,\ddot{\sigma})\lesssim\ddot{\mathcal{P}}(I^m_t,\ddot{\sigma}) $ and $\ddot{\mathcal{P}}(I,\ddot{\omega})\lesssim\ddot{\mathcal{P}}(I^m_t,\ddot{\omega})$, which imply $$ \ddot{\mathcal{P}}(I,\ddot{\sigma})\ddot{\mathcal{P}}(I,\ddot{\omega})\lesssim 1. $$
\textbf{Case 2.} Now assume $G_j^k\subset AI$. If $I^k_j \cap I=\emptyset$, then, using the minimality of $k$, $I \subset G^m_t$ for some $m<k$ and we can repeat the proof of \eqref{first case}. If $I^k_j \cap I \neq \emptyset$ then $|I| \lesssim |I^k_j| $ since otherwise $AI$ would contain $\ddot{z}^{k-1}_t$, contradicting the minimality of k if we fix $A$ big enough depending only on $\alpha$. Hence we have: $$
|G_j^k|+|x-\ddot{z}_j^k|\leq |G_j^k|+|x_I-\ddot{z}_j^k|+|x-x_I|\leq
\left(A+\frac{A}{2}\right)|I|+|x-x_I| $$ which implies that $$ \ddot{\mathcal{P}}(I,\ddot{\sigma}) \!\lesssim\!
\int_{I_1^0}\frac{|I|^{2-\alpha}}{\left(|G_j^k|+|x-\ddot{z}_j^k|\right)^{4-2\alpha}} d\ddot{\sigma}(x) \!\lesssim\!
\frac{|I|^{2-\alpha}}{|I_{j}^k|^{2-\alpha}}
\!\int_{I_1^0}\!\frac{|I_j^k|^{2-\alpha}}{\left(|I_j^k|+|x-\ddot{z}_j^k|\right)^{4-2\alpha}} d\ddot{\sigma}(x) $$ and similarly $$ \ddot{\mathcal{P}}(I,\ddot{\omega}) \lesssim
\frac{|I|^{2-\alpha}}{|I_{j}^k|^{2-\alpha}} \ddot{\mathcal{P}}(I_{j}^k,\ddot{\omega}) \leq \ddot{\mathcal{P}}(I_{j}^k,\ddot{\omega}) , $$ which implies $$ \ddot{\mathcal{P}}(I,\ddot{\sigma}) \ddot{\mathcal{P}}(I,\ddot{\omega}) \lesssim 1 . $$ \textbf{Case 3.} If $G_j^k\cap AI$ equals neither $G_j^k$ nor $AI$, note that $G_j^k\subset 3AI$ and we repeat the proof of Case 2.
Thus, for any interval $I\subset I_1^0$, we have shown that $\ddot{\mathcal{P}}(I,\ddot{\sigma})\ddot{\mathcal{P}}(I,\ddot{\omega})\lesssim 1$, which implies \begin{equation}\label{var_a2_bdd} \ddot{\mathcal{A}_2^{\alpha}}(\ddot{\sigma},\ddot{\omega})<\infty \end{equation}
\textsc{The Energy Constants $\ddot{\mathcal{E}}$ and $\ddot{\mathcal{E}}^{*}$}. Now define the following variant of the energy constants \begin{eqnarray*} \ddot{\mathcal{E}} &=& \sup_{I=\dot{\bigcup}I_r} \frac{1}{\ddot{\sigma}(I)} \sum_{r\geq 1} \ddot{\omega}(I_r)E(I_r,\ddot{\omega})^2\ddot{\mathrm{P}}(I_r,\mathbf{1}_I\ddot{\sigma})^2\\ \ddot{\mathcal{E}}^{*} &=& \sup_{I=\dot{\bigcup}I_r} \frac{1}{\ddot{\omega}(I)} \sum_{r\geq 1} \ddot{\sigma}(I_r)E(I_r,\ddot{\sigma})^2\ddot{\mathrm{P}}(I_r,\mathbf{1}_I\ddot{\omega})^2 \end{eqnarray*} where the supremum is taken over the different intervals $I$ and all the different decompositions of $I=\dot{\bigcup}_{r\geq 1}I_r$, and $$
\ddot{\mathrm{P}}(I,\mu)=\int_\mathbb{R}\frac{|I|}{\left(|I|+|x-x_I|\right)^{3-\alpha}}d\mu(x), $$ $$
E(I,\mu)^2=\frac{1}{2}\frac{1}{\mu(I)^2}\int_{I}\int_{I} \frac{(x-x')^2}{|I|^2} d\mu(x')d\mu(x)=\frac{1}{\mu(I)}\cdot \left\Vert x-m_I^\mu \right\Vert^2_{L^2(\mathbf{1}_I\mu)}\leq 1. $$ We first show that $\ddot{\mathcal{E}}$ is bounded. We have \begin{eqnarray*} \ddot{\mathrm{P}}(I,\ddot{\sigma}) =
\int\frac{|I|}{\left(|I|+|x-x_I|\right)^{3-\alpha}}d\ddot{\sigma}(x) \!\!\!\!&\lesssim&\!\!\!\!
\sum_{n=0}^\infty \frac{\ddot{\sigma}\big((2^n+1)I\big)}{(2^n)|2^nI|^{2-\alpha}}\\ &\leq&\!\!\!\! \sum_{n=0}^\infty \inf_{x \in I}M^\alpha\ddot{\sigma}(x)2^{-n} \lesssim \inf_{x \in I}M^\alpha\ddot{\sigma}(x) \end{eqnarray*} where
$\displaystyle M^\alpha\mu(x)=\sup_{I\ni x}\frac{1}{|I|^{2-\alpha}}\int_Id\mu $ and the implied constants depend only on $\alpha$. Thus, given an interval $\displaystyle I=\dot{\cup}_{r\geq 1}I_r$, we have: $$ \sum_{r \geq 1}\ddot{\omega}(I_r)\ddot{\mathrm{P}}^2(I_r,\mathbf{1}_{I}\ddot{\sigma}) \leq \sum_{r\geq 1}\ddot{\omega}(I_r) \inf_{x \in I}\left(M^\alpha\mathbf{1}_{I}\ddot{\sigma}\right)^2(x) \leq \int_{I} \left(M^\alpha\mathbf{1}_{I}\ddot{\sigma}\right)^2(x)d\ddot{\omega}(x) $$ and so we are left with estimating the right hand term of the above inequality. We will prove the inequality \begin{equation} \label{maximal} \int_{I^\ell_r} \left(M^\alpha\mathbf{1}_{I^l_r}\ddot{\sigma}\right)^2(x)d\ddot{\omega}(x)\leq C \ddot{\sigma}(I^\ell_r). \end{equation} where the constant $C$ depends only on $\alpha$. This will be enough, since for an interval $I$ containing a point mass $\ddot{z}^\ell_r$ but no masses $\ddot{z}^k_j$ for $k<\ell$, we have \begin{eqnarray*}
\int_{I} \left(M^\alpha\ddot{\sigma}\right)^2(x)d\ddot{\omega}(x)=
\int_{I\cap I^\ell_r} \left(M^\alpha\mathbf{1}_{I\cap I^\ell_r}\ddot{\sigma}\right)^2(x)d\ddot{\omega}(x)
&\leq&
\int_{I^\ell_r}\left(M^\alpha\mathbf{1}_{I^\ell_r}\ddot{\sigma}\right)^2(x)d\ddot{\omega}(x)\\
&\leq&
\ddot{\sigma}(I^\ell_r)\approx \ddot{\sigma} (I) \end{eqnarray*} Since the measure $\ddot{\omega}$ is supported in the Cantor set $\mathrm{E}_b$, we can use the fact that for $x \in I^\ell_r\cap \mathrm{E}_b$, $$ M^\alpha(\mathbf{1}_{I^\ell_r}\ddot{\sigma})(x)\lesssim\!\!\! \sup_{\left( k,j\right) :x\in I_{j}^{k}}\frac{1}{\left\vert I_{j}^{k}\right\vert^{2-\alpha} }\int_{I_{j}^{k}\cap I_{r}^{\ell }}\!\!\!\!\!d\ddot{\sigma} \approx \!\!\!\!\! \sup_{\left( k,j\right) :x\in I_{j}^{k}}\!\!\!\!\frac{s_0 ^{-2(k\vee \ell) } 2^{k\vee \ell}}{s_0
^{-k}}\approx \frac{\ddot{\sigma}(I^\ell_r)}{|I^\ell_r|^{2-\alpha}}\approx\left(\frac{2}{s_0}\right)^\ell $$ Fix $m$ and let the approximations $\ddot{\omega} ^{\left( m\right) }$ and $\ddot{\sigma}^{\left( m\right) }$ to the measures $\ddot{\omega} $ and $\ddot{\sigma}$ be given by
\begin{eqnarray*} d\ddot{\omega} ^{\left( m\right) }\left( x\right) = \sum_{i=1}^{2^{m}}2^{-m}\frac{1 }{\left\vert I_{i}^{m}\right\vert }\mathbf{1}_{I_{i}^{m}}\left( x\right) dx \ \text{ and }\ \ddot{\sigma}^{\left( m\right) } = \sum_{k<m}\sum_{j=1}^{2^{k}}s_{j}^{k}\delta _{z_{j}^{k}}. \notag \end{eqnarray*} For these approximations we have in the same way the estimate for $ x\in \bigcup_{i=1}^{2^{m}}I_{i}^{m}$, \begin{equation*} M^\alpha\left( \mathbf{1}_{I_{r}^{\ell }}\ddot{\sigma}^{\left( m\right) }\right) \left( x\right) \lesssim \!\!\!\!\! \sup_{\left( k,j\right) :x\in I_{j}^{k}}\frac{1}{\left\vert I_{j}^{k}\right\vert^{2-\alpha} }\int_{I_{j}^{k}\cap I_{r}^{\ell }}\!\!\!d\ddot{\sigma} \approx \!\!\!\sup_{\left( k,j\right) :x\in I_{j}^{k}}\!\!\!\frac{\left( \frac{1}{s_0}\right) ^{k\vee \ell }\left( \frac{2}{s_0}\right) ^{k\vee \ell }}{\left( \frac{1}{s_0} \right) ^{k}} \leq C\left( \frac{2}{s_0}\right) ^{\ell } \end{equation*} Thus for each $m\geq n \geq \ell$ we have \begin{eqnarray*} \int_{I_{r}^{\ell }}\!\!\! M^\alpha\left( \mathbf{1}_{I_{r}^{\ell }}\ddot{\sigma} ^{\left( n\right) }\right) ^{2}\!\!d\ddot{\omega} ^{\left( m\right) } \!\leq\! C\!\!\!\sum_{i:I_{i}^{m}\subset I_{r}^{\ell }}\!\!\!\left( \frac{2}{s_0}\right)^{2\ell }\!\!\!\!\!2^{-m} \!=\! C2^{m-\ell }\!\!\left( \frac{2}{s_0}\right) ^{2\ell }\!\!\!\!2^{-m}=Cs_{r}^{\ell }\approx C\int_{I_{r}^{\ell }}d\ddot{\sigma} \end{eqnarray*} Now since $\ddot{\omega}^m$ converges weakly to $\ddot{\omega}$ and using the fact that $M^\alpha$ is lower semi-continuous we get: $$ \int_{I_{r}^{\ell }}\!\!\! M^\alpha\left( \mathbf{1}_{I_{r}^{\ell }}\ddot{\sigma} ^{\left( n\right) }\right) ^{2}\!\!d\ddot{\omega} \leq \liminf\limits_{m\rightarrow \infty} \int_{I_{r}^{\ell }}\!\!\! M^\alpha\left( \mathbf{1}_{I_{r}^{\ell }}\ddot{\sigma} ^{\left( n\right) }\right) ^{2}\!\!d\ddot{\omega} ^{\left( m\right)} \leq C\ddot{\sigma}(I_{r}^{\ell }) $$ Now, taking $n\rightarrow \infty$, by monotone convergence we get (\ref{maximal}). This proves \begin{equation}\label{pivotal}
\sum_{r \geq 1}\ddot{\omega}(I_r)\ddot{\mathrm{P}}^2(I_r,\mathbf{1}_{I}\ddot{\sigma})\leq C\ddot{\sigma}(I) \end{equation} which in turn implies $\ddot{\mathcal{E}}<\infty$ as $E(I_r,\ddot{\omega})\leq 1$.
Finally, we show that the dual energy constant $\ddot{\mathcal{E}}^{*}$ is finite. Let us show that for $I\subset I_1^0$ \begin{equation} \ddot{\sigma} (I)E(I,\ddot{\sigma} )^{2}\ddot{\mathrm{P}}(I,\ddot{\omega} )^{2}\lesssim \ddot{\omega} (I). \label{e.gettingE} \end{equation} as if we let $\{I_{r}\;:\;r\geq 1\}$ be any partition of $I$, (\ref{e.gettingE}) gives \begin{equation*} \sum_{r\geq 1}\ddot{\sigma} (I_{r})E(I_{r},\ddot{\sigma} )^{2}\ddot{\mathrm{P}} (I_{r},\ddot{\omega} )^{2}\lesssim \sum_{r\geq 1}\ddot{\omega} (I_{r})=\ddot{\omega} (I_{})\ . \end{equation*}
Now let us establish (\ref{e.gettingE}). We can assume that $E(I,\ddot{\sigma} )\neq 0$. Let $k$ be the smallest integer for which there is a $r$ with $\ddot{z}_{r}^{k}\in I$. And let $n$ be the smallest integer so that for some $s$ we have $\ddot{z}_{s}^{k+n}\in I $ and $\ddot{z}_{s}^{k+n}\neq \ddot{z}_{r}^{k}$. We have that \begin{eqnarray*}
E(I,\ddot{\sigma} )^2 &=& \frac{1}{2}\frac{1}{\ddot{\sigma}(I)^2}\int_I\int_I\frac{|x-x'|^2}{|I|^2}d\ddot{\sigma}(x)d\ddot{\sigma}(x')\\ &=&
\frac{1}{\ddot{\sigma}(I)^2}\left[\ddot{\sigma}(\ddot{z}_{r}^{k})\int_I\frac{|x-\ddot{z}_{r}^{k}|^2}{|I|^2}d\ddot{\sigma}(x)+\int_I\int_{I\backslash\{\ddot{z}_{r}^{k}\}}\frac{|x-x'|^2}{|I|^2}d\ddot{\sigma}(x)d\ddot{\sigma}(x')\right]\\ &\lesssim& \frac{\ddot{\sigma}(\ddot{z}_{r}^{k})\ddot{\sigma}(I\backslash\{\ddot{z}_{r}^{k}\})}{\ddot{\sigma}(I)^2}+\frac{\ddot{\sigma}(I\backslash\{\ddot{z}_{r}^{k}\})}{\ddot{\sigma}(I)}\lesssim \left( \frac{2}{s_0^2}\right) ^{n} \end{eqnarray*} Finally, $\ddot{\sigma}( I)\approx \left( \frac{2}{s_0^2}\right) ^{k}$, $ \ddot{\omega}( I)\approx 2^{-k-n}$, and $\ddot{\mathrm{P}}(I,\ddot{\omega} )\approx \left( \frac{s_0}{2} \right) ^{k}$, which proves \eqref{e.gettingE}.
$$ \underline{\textsc{The Two Dimensional Construction}} $$ It is time now to define the two dimensional measures that prove the statement of Theorem \ref{fracint}. For any set $E\subset \mathbb{R}^2$ let $$ \omega(E)=\sum_{n=0}^\infty \ddot{\omega}_n(E) $$ where $\ddot{\omega}_0(E)=\ddot{\omega}(E_x\cap I_1^0)$, $E_x$ the projection of $E$ on the x-axis, and $\ddot{\omega}_n$ are copies of $\ddot{\omega}_0$ at the intervals $[a_n,a_n+1]\times \{0\}$ with $k_n=a_{n+1}-(a_n+1)$ to be determined later. In the same way, let $$ \sigma(E)=\sum_{n=0}^\infty \ddot{\sigma}_n(E) $$ where $\ddot{\sigma}_0(E)=\ddot{\sigma}([E\cap (I_1^0\times\{\gamma_0\})]_x)$, and $\ddot{\sigma}_n$ are copies of $\ddot{\sigma}_0$ at the intervals $[a_n,a_n+1]\times \{\gamma_n\}$, where the height $\gamma_n$ will be determined later. \begin{center} \begin{tikzpicture} \label{Figure 2.1.}
\color{blue} \draw (-6,0) -- (-4,0); \draw (-3,0) -- (-1,0); \draw (0.3,0) -- (2.3,0); \draw (4.05,0) -- (6.05,0); \node (f) at (-5,-0.2) {$\omega$}; \node (g) at (-2,-0.2) {$\omega$}; \node (h) at (1.3,-0.2) {$\omega$}; \node (i) at (5.05,-0.2) {$\omega$};
\color{red} \draw (-6,1) -- (-4,1); \draw (-3,0.5) -- (-1,0.5); \draw (0.3,0.2) -- (2.3,0.2); \draw (4.05,0.1) -- (6.05,0.1); \node (a) at (-5,1.2) {$\sigma$}; \node (b) at (-2,0.7) {$\sigma$}; \node (c) at (1.3,0.4) {$\sigma$}; \node (d) at (5.05,0.3) {$\sigma$}; \color{black} \draw[decorate,decoration={brace}] (-6.1,0) -- (-6.1,1) node (k) at (-6.4,0.5){\footnotesize $\gamma_n$}; \draw[decorate,decoration={brace,mirror}] (-4,-0.2) -- (-3,-0.2) node (m) at (-3.5,-0.4){\footnotesize $k_n$}; \end{tikzpicture}
Figure 1 \end{center} \textsc{The $\mathcal{A}_2$ conditions.} We will now prove that both $\mathcal{A}^{\alpha}_2$ and $\mathcal{A}^{\alpha,*}_2$ constants are bounded. Let $Q$ be a cube in $\mathbb{R}^2$, $J^n_0=[a_n,a_n+1]\times \{0\}$ and $J^n_{\gamma_n}=[a_n,a_n+1]\times \{\gamma_n\}$. We take cases for $Q$. If $Q$ intersects only one of the intervals $J^n_0$, say $J^0_0$ for convenience, and $(Q\cap J^0_0)_x=:J_0$ we have: \begin{eqnarray*}
\mathcal{P}^\alpha(Q,\textbf{1}_{Q^c}\sigma)\frac{\omega(Q)}{|Q|^{1-\frac{\alpha}{2}}} &\lesssim &
\ddot{\mathcal{P}}(J_0,\ddot{\sigma})\frac{\ddot{\omega}(J_0)}{|J_0|^{2-\alpha}} +
\mathcal{P}^\alpha(Q,\textbf{1}_{(J^1_{\gamma_1})^c}\sigma)\frac{\ddot{\omega}(I_1^0)}{|Q|^{1-\frac{\alpha}{2}}}\\ &\leq& \ddot{\mathcal{A}}_2^\alpha(\ddot{\sigma},\ddot{\omega})+C<\infty \end{eqnarray*}
using (\ref{var_a2_bdd}) and taking $k_n$ large enough so that the second summand is bounded independently of the interval ($k_n=4^{2n\cdot \max\{(2-\alpha)^{-1},1\}}$ would do here). If $Q$ intersects more than one of the intervals $J^n_0$, it is easy to see, using that $Q$ is very big (since it intersects more than one of the intervals) and that $k_n$ is also large, that: \begin{eqnarray*}
\mathcal{P}^\alpha(Q,\textbf{1}_{Q^c}\sigma)\frac{\omega(Q)}{|Q|^{1-\frac{\alpha}{2}}} \lesssim 1 \end{eqnarray*} which of course shows that $\mathcal{A}^{\alpha}_2$ is bounded. Essentially using the same calculations we see that $\mathcal{A}^{\alpha,*}_2$ is bounded as well. \\ $ $\\
\textsc{Off-Testing Constant}. Let us now check that the off-testing constant is not bounded. Choose the cube $Q_n=[a_n,a_n+1]\times[0,-1]$. Then, \begin{eqnarray*}
\frac{1}{\omega (Q_n)}\!\int_{Q_n^c}\!\!\bigg[\!\int_{Q_n}\!\frac{d\omega (y)}{|x-y|^{2-\alpha}}\bigg]^2\!\!d\sigma (x) \!\geq\! \frac{1}{\ddot{\omega}(I_1^0)}\!\int_{I_1^0}\!\!\bigg[\!\int_{I_1^0}\!\frac{d\ddot{\omega} (y_1)}{\sqrt{(x_1-y_1)^2+\gamma_n^2}^{2-\alpha}}\bigg]^2\!\!d\ddot{\sigma}(x_{1}\!) \end{eqnarray*}
for $x=(x_1,x_2)$ and $y=(y_1,y_2)$. Taking $\gamma_n$ such that the last expression on the display above equals $n$ (note that this is feasible, since for $\gamma_n=0$, \eqref{testinfinity} gives infinity in the latter expression above) we have $$
\mathcal{T}_{\textit{off},\alpha}^2\geq \frac{1}{\omega (Q_n)}\int_{Q_n^c}\bigg[\int_{Q_n}\frac{d\omega (y)}{|x-y|^{2-\alpha}}\bigg]^2d\sigma (x)\geq n $$ and by letting $n\rightarrow \infty$ we obtain that the off-testing constant is not bounded. \\ $ $\\ \textsc{The Energy Conditions}. For the energy condition $\mathcal{E}_2^\alpha$ first, let $Q$ be a cube and $Q=\dot{\cup}Q_r$, where $\{Q_r\}_{r=1}^\infty$ is a decomposition of Q. Then we have $$ \frac{1}{\sigma(Q)}\sum_{r=1}^{\infty }\left( \frac{\mathrm{P} ^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right) }{\left\vert Q_{r}\right\vert ^{\frac{1}{2}}}\right) ^{2}\left\Vert x-m^\omega_{Q_{r}}\right\Vert _{L^{2}\left( \mathbf{1}_{Q_{r}}\omega \right) }^{2} \!\!\!\leq \frac{2}{\sigma(Q)}\sum_{r=1}^\infty \omega(Q_r)\big(\mathrm{P}^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right)\big)^2 $$ Assume that $Q$ intersects $m$ intervals of the form $J_0^n$. Then we have $m-2\lesssim \sigma(Q)\lesssim m$. The case $m=1$ is exactly the same as the one dimensional analog for $\ddot{\mathcal{E}}$. Assume $m=2$. Now we need to take cases for $Q_r$: \begin{enumerate}[(i)]
\item Let $Q^1$ be the set of cubes $Q_r$ that intersect only one of the intervals $J^n_0$. Then we have, following the proof of \eqref{pivotal}, that
\begin{equation*}
\sum_{Q_r \in Q^1}\omega(Q_r)\left(\mathrm{P}^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right)\right)^2\leq C\sigma(Q)
\end{equation*}
\item If $Q_r$ intersects both of the intervals $J^n_0$ then this $Q_r$ is unique since the family $\{Q_r\}_{r \in \mathbb{N}}$ forms a decomposition of $Q$. Therefore we have:
\begin{equation*}
\omega(Q_r)\left(\mathrm{P}^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right)\right)^2\lesssim \frac{\omega(Q_r)\sigma(Q)}{|Q_r|^{2-\alpha}}\sigma(Q)\lesssim \sigma(Q)
\end{equation*}
using the fact that $|Q_r|\gtrsim 4^2$ since it intersect two of the intervals $J^n_0$ and $\omega(Q_r)\lesssim 2, \sigma(Q)\lesssim 2$. \end{enumerate} For $m\geq 3$, again we take cases for $Q_r$: \begin{enumerate}[(i)]
\item If $Q_r$ intersects only one $J^n_0$ we again have, following the proof of \eqref{pivotal}, that
\begin{equation*}
\sum_{Q_r \in Q^1}\omega(Q_r)\left(\mathrm{P}^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right)\right)^2\leq C\sigma(Q)
\end{equation*}
\item If $Q_r$ intersects more than one of the intervals $J^n_0$, the last one being $J^{n_0}_0$ we have
\begin{equation*}
\omega(Q_r)\left(\mathrm{P}^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right)\right)^2\lesssim \frac{\omega(Q_r)\sigma(Q_r^-)^2}{|Q_r|^{2-\alpha}}+\omega(Q_r)\sum_{k=1}^m\frac{1}{4^{2k}|Q_r|^{2-\alpha}} \lesssim 2
\end{equation*}
where $Q_r^-$ contains all the intervals $J^n_0$ such that $n \leq n_0$. Again in the last inequality we use the fact that $Q_r$ is very big since it intersects at least two intervals $J^n_0$. Now since $Q_r$ form a decomposition of $Q$ we can have at most $m-1$ of these. \end{enumerate}
Combining the above cases, we obtain $$ \sum_{r=1}^\infty \omega(Q_r)\big(\mathrm{P}^{\alpha }\left( Q_{r},\mathbf{1}_{Q}\sigma \right)\big)^2\leq C\sigma(Q)+2m-2\leq 2C\sigma(Q) $$ and that proves the energy condition is bounded.
The dual energy constant $\mathcal{E}^{\alpha,*}_2$ can also be proved bounded with the same calculations as in the energy condition, following the proof of \eqref{e.gettingE} instead of \eqref{pivotal} as in the first case above. This completes the proof of Theorem \ref{fracint}. \end{proof} To obtain the same result for the Riesz transforms, we need to deal with the fact that the kernel is not positive. This prevents us from placing the point masses of $\ddot{\sigma}$ at the centers of the intervals $G^k_j$, as we did in the proof of Theorem \ref{fracint}: if the point masses were located at the centers of the intervals $G^k_j$, the cancellation of much of the mass would not let us deduce that the off-testing condition for the Riesz transform is unbounded. The following lemma, whose proof follows closely the work in \cite{LaSaUr} but with a two dimensional twist, helps us overcome this problem, showing that, while not being able to place the point masses in the middle of $G^k_j$, we can place them far from the boundary. This enables us to show that the $\ddot{\mathcal{A}}_2$ condition is bounded, as in the proof of Theorem \ref{fracint}. First we need to define the operator $$
\ddot{R}f(x)=\int_\mathbb{R}\frac{(x-y)f(y)}{|x-y|^{3-\alpha}}dy $$ \begin{lemma}\label{lemma} For $k \geq 1,\ 1\leq j \leq 2^k$, write $G^k_j=(a^k_j,b^k_j)$. Then there exists $0<c<1$ that depends only on $\alpha$ such that $$ \ddot{R}\ddot{\omega}\!\left(\!a^k_j\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) \approx \left(\frac{s_0}{2}\right)^k $$ where $\ddot{\omega}$ is the measure defined above. \end{lemma} \begin{proof} Fix $k$. We have $$ \ddot{R}\ddot{\omega}\!\left(\!a^k_1\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) \!\leq\! \ddot{R}\ddot{\omega}\!\left(\!a^k_j\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) \!\leq\! \ddot{R}\ddot{\omega}\!\left(\!a^k_{2^k}\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) $$ from monotonicity. So it is enough to prove the following: $$ \left(\frac{s_0}{2}\right)^k \lesssim \ddot{R}\ddot{\omega}\!\left(\!a^k_1\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) \leq \ddot{R}\ddot{\omega}\!\left(\!a^k_{2^k}\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) \lesssim \left(\frac{s_0}{2}\right)^k $$ We start with right hand inequality. Following the definitions of $\ddot{R},\ddot{\omega}$ we get \begin{eqnarray*} \ddot{R}\ddot{\omega}\!\left(\!a^k_{2^k}\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) \!\!\!\!&\leq&\!\!\!\! \int_{[0,a^k_{2^k}]}\frac{d\ddot{\omega}(y)}{\left(a^k_{2^k}\!+\!c\left(\frac{1-b}{2}\right)^{k}\!b\!-\!y\right)^{2-\alpha}}\\ \!\!\!\!&\leq&\!\!\!\! \sum_{\ell=1}^{k}\frac{2^{-\ell}}{\left(a^k_{2^k}\!+\!c\left(\frac{1-b}{2}\right)^{k}\!b\!-\!\left[1\!-\!\left(\frac{1-b}{2}\right)^{\ell-1}\!\left(\frac{1+b}{2}\right)\right]\right)^{2-\alpha}}\\ &\approx&\!\!\!\! \frac{2^{-k}}{c^{2-\alpha}s_0^{-k}}\!+\!\sum_{\ell=1}^{k-1}\frac{2^{-\ell}}{s_0^{-\ell}\!\left[\frac{1+b}{2}\!+\!\left(\frac{1-b}{2}\right)^{k-\ell+1}\!\left[cb\!-\!\frac{1+b}{2}\right]\right]^{2-\alpha}}\\ &\leq&\!\!\!\! \frac{2^{-k}}{c^{2-\alpha}s_0^{-k}}\!+\!\sum_{\ell=1}^{k-1}\frac{2^{-\ell}}{s_0^{-\ell}\!\left[\frac{1+b}{2}\!-\!\frac{1+b}{2}\left(\frac{1-b}{2}\right)^{k-\ell+1}\right]^{2-\alpha}} \end{eqnarray*} since $a_{2^k}^k=1\!-\!\left(\frac{1+b}{2}\right)\left(\frac{1-b}{2}\right)^k$. The square bracket inside the last fraction is minimized for $\ell=k-1$ and we get the inequality $$ \ddot{R}\ddot{\omega}\!\left(\!a^k_{2^k}\!+\!c\left(\!\frac{1-b}{2}\right)^{k}\!b\!\right) \lesssim \frac{2^{-k}}{c^{2-\alpha}s_0^{-k}}+\sum_{\ell=1}^{k-1}\left(\frac{s_0}{2}\right)^\ell \lesssim \frac{1}{c^{2-\alpha}}\left(\frac{s_0}{2}\right)^k $$ where the implied constants depend again only on $\alpha$. We should note here that the summand with $\ell=k$ is the dominant one in the above inequality.
Now we consider the left hand inequality. We have that $ \ddot{R}\ddot{\omega}\left(\!a^k_1\!+\!c\!\left(\!\frac{1-b}{2}\!\right)^{\!k}\!b\right) $ equals \begin{equation}\label{varR} \ddot{R}\ddot{\omega}\mathbf{1}_{I^{k+1}_1} \left(\!a^k_1\!+\!c\!\left(\!\frac{1-b}{2}\!\right)^{\!k}\!b\right) +\sum_{\ell=1}^{k+1}\ddot{R}\ddot{\omega}\mathbf{1}_{I^\ell_2}\left(\!a^k_1\!+\!c\!\left(\!\frac{1-b}{2}\!\right)^{\!k}\!b\right) \end{equation} and following the argument for the previous inequality we see that $$
\left|\sum_{\ell=1}^{k+1}\ddot{R}\ddot{\omega}\mathbf{1}_{I^\ell_2}\left(\!a^k_1\!+\!c\!\left(\!\frac{1-b}{2}\!\right)^{\!k}\!b\right)\right|\leq A\left(\frac{s_0}{2}\right)^k $$ where $A$ depends only on $\alpha$ but not on $c$. The first summand of (\ref{varR}) gives \begin{eqnarray*} \int_{I_1^{k+1}}\frac{d\ddot{\omega}(y)}{\left(a^k_1\!+\!c\left(\frac{1-b}{2}\right)^{k}\!b\!-\!y\right)^{2-\alpha}} &\geq& \sum_{\ell=k+1}^\infty\frac{2^{-\ell-1}}{\left(\left(\frac{1-b}{2}\right)^\ell+c\left(\frac{1-b}{2}\right)^{k}b\right)^{2-\alpha}} \end{eqnarray*} \begin{eqnarray*} &\hspace{4.3cm}\approx& \frac{s_0^k}{2^k}\sum_{\ell=k+1}^\infty\frac{2^{-\ell+k-1}}{\left(\left(\frac{1-b}{2}\right)^{\ell-k}\!+\!cb\right)^{2-\alpha}}\\ &\hspace{4.3cm}=& \frac{s_0^k}{2^k}\sum_{\ell=1}^\infty\frac{2^{-\ell-1}}{\left(\left(\frac{1-b}{2}\right)^{\ell}\!+\!c b\right)^{2-\alpha}}. \end{eqnarray*} Choosing $c$ small enough not depending on $k$ (since the last sum does not depend on $k$), we obtain $$ \int_{I_1^{k+1}}\frac{d\ddot{\omega}(y)}{\left(a^k_1\!+\!c\left(\frac{1-b}{2}\right)^{k}\!b\!-\!y\right)^{2-\alpha}} \geq C_1\left(\frac{s_0}{2}\right)^k $$ with $C_1>2A$ and we conclude our lemma. \end{proof} \begin{proof}[Proof of Theorem \ref{Riesz}] Set $\dot{z}^k_j=a^k_j\!+\!cb\left(\!\frac{1-b}{2}\right)^{k}$ and define the measure $\displaystyle \dot{\sigma}=\sum_{k,j}s^k_j\delta_{\dot{z}^k_j} $ where $s_j^k=\left(\frac{2}{s_0^2}\right)^k$ as before. Following verbatim the calculations of Theorem \ref{fracint}, one can show that $ \ddot{\mathcal{A}}_2(\dot{\sigma},\ddot{\omega})<\infty. $ Now define the measures $\omega$ and $\sigma$, as before, for any measurable set $E\subset\mathbb{{\mathbb R}}^2$ by
$$ \omega(E)=\sum_{n=0}^\infty \ddot{\omega}_n(E)\ \ \text{and}\ \ \sigma(E)=\sum_{n=0}^\infty \dot{\sigma}_n(E) $$ where $\dot{\sigma}_0(E)=\dot{\sigma}([E\cap (I_1^0\times\{\gamma_0\})]_x)$, and $\dot{\sigma}_n$ are copies of $\dot{\sigma}_0$ at the intervals $[a_n,a_n+1]\times \{\gamma_n\}$, and where the height $\gamma_n$ will be determined later. Again, as before, it is easy to see that both $\mathcal{A}_2^\alpha$ and $\mathcal{A}_2^{\alpha,*}$ and both $\mathcal{E}^\alpha_2$ and $\mathcal{E}^{\alpha,*}_2$ are bounded. Let us now finish the proof by showing that the off-testing constant for the Riesz transforms are unbounded. From Lemma \ref{lemma} we have $ \ddot{R}\ddot{\omega}(\dot{z}_j^k)\gtrsim \left(\frac{s_0}{2}\right)^k $ which implies \begin{eqnarray} \int_{I_1^0}\left(\ddot{R}(\mathbf{1}_{I_1^0}\ddot{\omega})(x)\right)^2d\dot{\sigma}(y) \gtrsim \sum_{k=1}^ \infty\sum_{j=1}^{2^k}s^k_j\cdot \left(\frac{s_0}{2}\right)^{2k} = \label{test_R_infty} \sum_{k=1}^\infty\sum_{j=1}^{2^k}\frac{1}{2^k}=\infty. \end{eqnarray} Now choose the cube $Q_n=[a_n,a_n+1]\times[0,-1]$. Then, \begin{eqnarray*} \mathcal{R}_{1,\textit{off},\alpha}^2 &\geq&
\frac{1}{\omega (Q_n)}\int_{Q_n^c}\bigg[\int_{Q_n}\!\frac{(x_1-y_1)d\omega (y)}{|x-y|^{3-\alpha}}\bigg]^2\!d\sigma (x)\\ &\geq& \frac{1}{\omega(Q_n)} \int_{I_1^0}\bigg[\int_{I_1^0}\frac{(x_1-y_1)d\ddot{\omega} (y_1)}{\sqrt{(x_1-y_1)^2+\gamma_n^2}^{3-\alpha}}\bigg]^2\!d\dot{\sigma}(x_1) = \frac{n}{\omega(Q_n)} \end{eqnarray*} by choosing the height $\gamma_n$ so that $\int_{I_1^0}\!\!\bigg[\!\int_{I_1^0}\frac{(x_1-y_1)d\ddot{\omega} (y_1)}{\sqrt{(x_1-y_1)^2+\gamma_n^2}^{3-\alpha}}\bigg]^2\!d\dot{\sigma}(x_1)=n$ by (\ref{test_R_infty}). Letting $n\rightarrow\infty$, we see that the off-testing constant is unbounded. \end{proof} \textbf{Acknowledgement:} We would like to thank professors Eric Sawyer and Ignacio Uriarte-Tuero for edifying discussions and their useful suggestions.
\end{document}
monovacancy-neutral-formation-free-energy-crystal-npt
Property Definition (short name)
A common way to refer to the Property Definition. Note that there may be multiple Property Definitions with the same short name; to fully distinguish between Property Definitions, the full Tag URI (the Property Definition ID) must be used.
Property Definition ID
The full Property Definition identifier using the Tag URI scheme.
tag:[email protected],2015-07-28:property/monovacancy-neutral-formation-free-energy-crystal-npt
A brief one-sentence description of this Property Definition.
Formation free energy of a neutral monovacancy in a general crystal at finite temperature and stress
A description of this Property Definition.
Gibbs free energy of formation of a neutral monovacancy in a (possibly multispecies) infinite host crystal lattice at a specific temperature and stress state relative to a given infinite monoatomic reference lattice ('reservoir') at a possibly different temperature and stress state.
The user or organization who initially contributed this Property Definition.
jl2922
The user or organization who currently maintains this Property Definition.
The date the Property Definition was "minted", based on its Tag URI.
Content on GitHub
The following content may be available on GitHub: Property Definition (an EDN file containing the Property Definition Keys listed below); Physics Validator (a script provided by the user for validating that an instance of the Property is physically valid); Property Documentation Wiki (the contents of the Wiki displayed at the bottom of this page).
Property Definition
Physics Validator
Property Documentation Wiki
See the KIM Properties Framework for more detailed information.
Property Definition Keys
formation-free-energy
host-a
host-alpha
host-b
host-beta
host-c
host-cauchy-stress
host-gamma
host-removed-atom
host-space-group
host-temperature
host-wyckoff-coordinates
host-wyckoff-multiplicity-and-letter
host-wyckoff-species
reservoir-a
reservoir-alpha
reservoir-b
reservoir-beta
reservoir-c
reservoir-cauchy-stress
reservoir-gamma
reservoir-space-group
reservoir-temperature
reservoir-wyckoff-coordinates
reservoir-wyckoff-multiplicity-and-letter
reservoir-wyckoff-species
host-short-name
reservoir-cohesive-free-energy
reservoir-short-name
has-unit
The Gibbs free energy of formation associated with extracting the 'host-removed-atom' from the host crystal at the specified temperature and stress and adding it to a reservoir crystal at a possibly different temperature and stress.
The average length of the host crystal unit cell vector <a>. The associated direction must correspond to the first component of the entries of 'host-wyckoff-coordinates'. The triad (<a>,<b>,<c>) must form a right-handed system.
The average angle between the host crystal unit cell vectors <b> and <c>. Must be strictly greater than zero and strictly less than 90 degrees.
The average length of the host crystal unit cell vector <b>. The associated direction must correspond to the second component of the entries of 'host-wyckoff-coordinates'. The triad (<a>,<b>,<c>) must form a right-handed system.
The average angle between the host crystal unit cell vectors <a> and <c>. Must be strictly greater than zero and strictly less than 90 degrees.
The average length of the host crystal unit cell vector <c>. The associated direction must correspond to the third component of the entries of 'host-wyckoff-coordinates'. The triad (<a>,<b>,<c>) must form a right-handed system.
The [xx,yy,zz,yz,xz,xy] (i.e. [11,22,33,23,13,12]) components of the Cauchy stress acting on the periodic cell of the host crystal. The orthonormal basis (<e_1>,<e_2>,<e_3>) used to express the stress should be such that e_1 is in the direction of <a>, e_2 is in the direction of (<c> x <a>), and e_3 is in the direction of (<e_1> x <e_2>). The expected form should be [d e f r s t].
The average angle between the host crystal unit cell vectors <a> and <b>. Must be strictly greater than zero and strictly less than 90 degrees.
The index of the Wyckoff site corresponding to the atom being removed from the host lattice. This value refers to the ordering in 'host-wyckoff-multiplicity-and-letter' and ranges from one to the number of unique Wyckoff sites in the host crystal. The species of the atom being removed should match the species of the (monoatomic) reservoir crystal.
Hermann-Mauguin designation for the space group associated with the symmetry of the host crystal (e.g. Immm, Fm-3m, P6_3/mmc).
Temperature of the host crystal.
[":" 3]
Coordinates of the unique Wyckoff sites needed to generate the host lattice from its fully symmetry-reduced description, given as fractions of the host crystal lattice vectors. The origin used to specify the Wyckoff coordinates is assumed to correspond to the standard/default setting (see http://www.cryst.ehu.es/cgi-bin/cryst/programs/nph-def-choice). The order of elements in this array must correspond to the order of the entries listed in 'host-wyckoff-multiplicity-and-letter' and 'host-wyckoff-species'.
[":"]
Multiplicity and standard letter of the unique Wyckoff sites (e.g. 4a, 2b) needed to generate the host lattice from its fully symmetry-reduced description. The order of elements in this array must correspond to the order of the entries listed in 'host-wyckoff-coordinates' and 'host-wyckoff-species'.
The element symbol of the atomic species of the unique Wyckoff sites used to generate the host crystal from its fully symmetry-reduced description. The order of the entries must correspond to the order of the entries in 'host-wyckoff-coordinates' and 'host-wyckoff-multiplicity-and-letter'.
The average length of the reservoir crystal unit cell vector <a>. The associated direction must correspond to the first component of the entries of 'reservoir-wyckoff-coordinates'. The triad (<a>,<b>,<c>) must form a right-handed system.
The average angle between the reservoir crystal unit cell vectors <b> and <c>. Must be strictly greater than zero and strictly less than 90 degrees.
The average length of the reservoir crystal unit cell vector <b>. The associated direction must correspond to the second component of the entries of 'reservoir-wyckoff-coordinates'. The triad (<a>,<b>,<c>) must form a right-handed system.
The average angle between the reservoir crystal unit cell vectors <a> and <c>. Must be strictly greater than zero and strictly less than 90 degrees.
The average length of the reservoir crystal unit cell vector <c>. The associated direction must correspond to the third component of the entries of 'reservoir-wyckoff-coordinates'. The triad (<a>,<b>,<c>) must form a right-handed system.
The [xx,yy,zz,yz,xz,xy] (i.e. [11,22,33,23,13,12]) components of the Cauchy stress acting on the periodic cell of the reservoir crystal. The orthonormal basis (<e_1>,<e_2>,<e_3>) used to express the stress should be such that e_1 is in the direction of <a>, e_2 is in the direction of (<c> x <a>), and e_3 is in the direction of (<e_1> x <e_2>). The expected form should be [d e f r s t].
The average angle between the reservoir crystal unit cell vectors <a> and <b>. Must be strictly greater than zero and strictly less than 90 degrees.
Hermann-Mauguin designation for the space group associated with the symmetry of the reservoir crystal (e.g. Immm, Fm-3m, P6_3/mmc).
Temperature of the reservoir crystal.
Coordinates of the unique Wyckoff sites needed to generate the reservoir lattice from its fully symmetry-reduced description, given as fractions of the reservoir crystal lattice vectors. The origin used to specify the Wyckoff coordinates is assumed to correspond to the standard/default setting (see http://www.cryst.ehu.es/cgi-bin/cryst/programs/nph-def-choice). The order of elements in this array must correspond to the order of the entries listed in 'reservoir-wyckoff-multiplicity-and-letter' and 'reservoir-wyckoff-species'.
Multiplicity and standard letter of the unique Wyckoff sites (e.g. 4a, 2b) needed to generate the reservoir lattice from its fully symmetry-reduced description. The order of elements in this array must correspond to the order of the entries listed in 'reservoir-wyckoff-coordinates' and 'reservoir-wyckoff-species'.
The element symbol of the atomic species of the unique Wyckoff sites used to generate the reservoir crystal from its fully symmetry-reduced description. By convention, we take the reservoir to be monoatomic and to be of the same species as the atom removed to introduce the monovacancy.
Short name describing the host crystal type (e.g. fcc, bcc, diamond).
The cohesive free energy (negative of the potential energy per atom) of the reservoir crystal under the specified temperature and stress conditions.
Short name describing the reservoir crystal type (e.g. fcc, bcc, diamond).
This property is intended to quantify the free energy of formation of a monovacancy in an infinite host crystal at a specific stress state and non-zero temperature. If the energy of formation at zero temperature is desired, please use either the 'monovacancy-neutral-relaxed-formation-potential-energy-crystal-npt' or 'monovacancy-neutral-unrelaxed-formation-potential-energy-crystal-npt' properties as appropriate. The host crystal may be monoatomic or multiatomic and may have any lattice geometry which is in a state of thermodynamic equilibrium at the given temperature and stress state. This geometry is defined in terms of 1) three independent lengths and three independent angles which describe the lattice formed by the time-averaged positions of the constituent atoms and 2) the unique Wyckoff positions and corresponding space group. The single atom which is removed from the host crystal to form the monovacancy is specified by its position in the list of Wyckoff sites which is provided. This also indicates its species and coordinates since the order of the elements in 'host-wyckoff-multiplicity-and-letter', 'host-wyckoff-species', and 'host-wyckoff-coordinates' all correspond to one another. The quantity contained in the 'formation-free-energy' key is the formation free energy of a vacancy of designated species (denoted in this documentation as "species $$v$$") in the host crystal plus the chemical potential of a monoatomic reference "reservoir" crystal of the same species as the atom being removed to introduce the vacancy. The reservoir crystal is also an infinite, periodic structure and may be at a stress state and non-zero temperature which possibly differ from that of the host crystal so long as it is in thermodynamic equilibrium. The expression for the free energy of formation is then $$E^f_v = E_{\mathrm{vac},v}(T_\mathrm{host},\sigma_\mathrm{host}) - E_\mathrm{ideal}(T_\mathrm{host},\sigma_\mathrm{host}) + \mu_v(T_\mathrm{res},\sigma_\mathrm{res})$$ where $$E_{\mathrm{vac},v}$$ is the total Gibbs free energy of the host crystal with the vacancy of species $$v$$ at temperature $$T_\mathrm{host}$$ and stress state $$\sigma_\mathrm{host}$$, $$E_\mathrm{ideal}$$ is the total Gibbs free energy of the host crystal before the vacancy is introduced, and $$\mu_v$$ is the Gibbs free energy per atom of the reservoir crystal composed of species type $$v$$. Notionally, the energy of formation reported in this property can thus be understood as the difference between the free energy of the host crystal if a specific atom is removed from it and the free energy of the reservoir crystal if that atom was used to fill an on-lattice vacancy in it. In the case where the host crystal is monoatomic, the chemical potential is usually removed from consideration by taking the reservoir crystal to be of the same lattice geometry as the host crystal at the same temperature and stress state. As an example, suppose we were to compute the formation energy of a monovacancy in bcc Al at temperature $$T$$ and stress state $$\sigma$$, which we model with a periodic cell of 128 atoms. 
If the reservoir crystal is chosen as aforementioned, then the chemical potential $$\mu$$ is equal to the Gibbs free energy per atom of the ideal host crystal and we get $$E^f = E_\mathrm{vac}(T,\sigma) - E_\mathrm{ideal}(T,\sigma) + \mu(T,\sigma)$$ where $$E_\mathrm{vac}(T,\sigma)$$ is the total Gibbs free energy of the 127-atom system after the vacancy atom is removed, $$E_\mathrm{ideal}$$ is the total Gibbs free energy of the 128-atom system before the vacancy atom is removed, and $$\mu(T,\sigma)$$ is the cohesive free energy per atom. Thus, we may write $$\mu(T,\sigma) = E_\mathrm{ideal}(T,\sigma)/128$$, and $$ \begin{align*} E^f = & E_\mathrm{vac}(T,\sigma) - E_\mathrm{ideal}(T,\sigma) + E_\mathrm{ideal}(T,\sigma)/128 \\ = & E_\mathrm{vac}(T,\sigma) - 127*E_\mathrm{ideal}(T,\sigma)/128 \end{align*} $$ However, when the host crystal is not monoatomic or the reservoir crystal is chosen to be different from the host crystal, either in its geometry or thermodynamic state, it is essential to retain the chemical potential term. References: [1] T. Maeda, T. Wada. *J. Phys. Chem. Solids* **66** (2005) 1924-1927. [2] T. Maeda, S. Nakamura, T. Wada. *Jpn. J. Appl. Phys.* **50** (2011) 04DP07. [3] M. Posselt, F. Gao, W.J. Weber, V. Belko. *J. Phys.: Condens. Matter* **16** (2004) 1307. [4] S. Zhang, J. Northup. *Phys. Rev. Lett.* **67**, No. 17 (1991) 2339.
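The arithmetic in the 128-atom example above is easy to misapply, so a brief sketch may help. The following Python fragment simply evaluates the two expressions quoted in this documentation; the energy values are hypothetical placeholders, not results from any particular model or simulation.

```python
def formation_free_energy(E_vac, E_ideal, mu):
    """General case: E_f = E_vac(T_host, sigma_host) - E_ideal(T_host, sigma_host)
    + mu(T_res, sigma_res), with mu the Gibbs free energy per atom of the reservoir."""
    return E_vac - E_ideal + mu

def formation_free_energy_monoatomic(E_vac, E_ideal, n_atoms):
    """Monoatomic host with the reservoir chosen identical to the host:
    mu = E_ideal / N, so E_f = E_vac - (N - 1) / N * E_ideal."""
    return E_vac - (n_atoms - 1) / n_atoms * E_ideal

if __name__ == "__main__":
    # Hypothetical free energies (eV) for a 128-atom periodic cell.
    E_ideal = -430.08    # ideal 128-atom cell
    E_vac = -425.35      # 127-atom cell containing the vacancy
    mu = E_ideal / 128   # reservoir identical to the host crystal
    print(formation_free_energy(E_vac, E_ideal, mu))
    print(formation_free_energy_monoatomic(E_vac, E_ideal, 128))  # same value
```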
The intolerance to functional genetic variation of protein domains predicts the localization of pathogenic mutations within genes
Ayal B. Gussow (ORCID: orcid.org/0000-0002-0142-6084)1,2,
Slavé Petrovski1,3,
Quanli Wang1,
Andrew S. Allen4 &
David B. Goldstein1
Genome Biology volume 17, Article number: 9 (2016)
Ranking human genes based on their tolerance to functional genetic variation can greatly facilitate patient genome interpretation. It is well established, however, that different parts of proteins can have different functions, suggesting that it will ultimately be more informative to focus attention on functionally distinct portions of genes. Here we evaluate the intolerance of genic sub-regions using two biological sub-region classifications. We show that the intolerance scores of these sub-regions significantly correlate with reported pathogenic mutations. This observation extends the utility of intolerance scores to indicating where pathogenic mutations are most likely to fall within genes.
We previously introduced the Residual Variation Intolerance Score (RVIS) [1], a framework that ranks protein-coding genes based on their intolerance to functional variation by comparing the overall number of observed variants in a gene to the number of observed common functional variants. The basic idea is analogous to phylogenetic conservation approaches that rank genes by the degree to which they are evolutionarily conserved, except that standing human genetic variation is used to identify genes in which functional variation is strongly selected against and thus likely to be deleterious. This approach proved successful in prioritizing genes most likely to result in Mendelian disease [1]. Using the gene as the unit of analysis, however, fails to represent the reality that pathogenic mutations often cluster in particular parts of genes.
Table 1 AIC comparisons of different sets of predictors
While there are many approaches that assess various characteristics of variants [2–4], which can in turn be used to try to determine whether a variant is likely to be pathogenic, current approaches to the problem of localizing pathogenic variants within sub-regions of a gene rely heavily on conservation to define important boundaries. The rationale is that more conserved regions within a gene are more likely to contain pathogenic variants. Another option for defining genic sub-regions is to utilize the functional information about the corresponding protein from databases of manually annotated proteins, such as Swiss-Prot [5]. In fact, some variant level predictors, such as MutationTaster [2], take these data into account when they are available. However, while an approach that focuses on parts of proteins would ideally use divisions that correspond to functionally distinct parts of proteins, this information is not yet comprehensively available.
Here, we take a first step at an approach to divide the gene into sub-regions and rank the resulting sub-regions by their intolerance to functional variation. We use two divisions as surrogates for functionally distinct parts of the protein. The first is a division into protein domains, defined by sequence homology to known conserved domains. The second is a division into exons, reflecting that a gene can encode different isoforms of the protein using different exonic configurations.
For the protein domain division, we annotate each gene's protein domains based on the Conserved Domain Database (CDD) [6], a collection of conserved domain sequences. The coding region of each gene was aligned to the CDD. The final domain coordinates for each gene were defined as the regions within the gene that aligned to the CDD and the unaligned regions between each CDD alignment.
Following this, we sought to create a ranking of the resulting sub-regions that would reflect their intolerance to functional variation. One common approach to this is to rank stretches of sequence by their phylogenetic conservation [7]. However, relying on conservation alone can fail to capture human specific constraint. Thus, we used the RVIS approach introduced in [1] to rank these regions solely based on human polymorphism data. We therefore generated the RVIS as described in [1], but now applied to the sequence stretches encoding the protein domains as the unit of analysis. As in [1], the scores were generated based on the NHLBI Exome Sequencing Project (ESP) exome variant calls [8]. This resulted in a genome-wide ranking of all domain encoding regions. To reflect its focus on sub-regions of genes, we term this overall approach sub-region Residual Variation Intolerance Scores (subRVIS). The subRVIS scores derived from this particular division into protein domains are designated domain subRVIS. We then repeated this for the exonic division, generating a set of exonic scores termed exon subRVIS. Following the original RVIS formulation, a lower subRVIS score indicates a more intolerant region.
As this approach is solely based on variation in the human population, we also constructed comparable conservation-based scores for both gene sub-divisions. We based our conservation approach on GERP++ [7], a method that assigns each genomic position a score denoting its estimated evolutionary constraint. In this approach, for each sub-region we calculate the average GERP++ score across its bases. We term this approach subGERP. We applied subGERP to the domain regions, and term the resulting scores domain subGERP. We repeated this for the exonic coordinates, and term the resulting scores exon subGERP. A higher subGERP score indicates an overall more conserved region.
To assess these scores, we developed a model for testing the utility of these scores in predicting the presence of previously reported pathogenic variants within these sub-regions. We show that domain subRVIS, domain subGERP, exon subRVIS, and exon subGERP are all significantly correlated with the presence or absence of pathogenic mutations within their corresponding regions. Further, we show that by dividing the gene into sub-regions we add useful information beyond the undivided genic RVIS score.
Region definitions and score generation
We defined each gene's protein-coding region based on the consensus coding sequence project (CCDS) [9]. We divided these regions into domains based on the CDD [6] (Methods). The CDD is a collection of conserved domain sequences, represented as position-specific score matrices (PSSMs). Each gene's coding sequence was aligned to CDD, using RPS-BLAST. In total, we annotated 8,988 different types of domains in 16,611 genes, covering 41.5 % of coding regions. The final domain coordinates for each gene were defined by both the regions of the coding sequence that aligned to the CDD and the unaligned regions between CDD alignments. These coordinates are available in Additional file 1. Using these coordinates, there are 89,522 regions in total, an average of five regions per gene. We calculated intolerance scores for these regions using the approach described in [1] and designated these scores domain subRVIS. As the division into exons is biologically relevant, in particular with respect to the splicing machinery, we also generated subRVIS scores whereby each exon constitutes a region, and termed these exon subRVIS (Additional file 2).
Score assessment frameworks
Following the generation of the scores, we developed two frameworks to test how well regional intolerance scores can predict the distribution of known pathogenic variants in disease-associated genes (Methods). We used information on reported pathogenic variants from two large databases: ClinVar [10] (accessed June 2015) and the human gene mutation database (HGMD) [11] (release 2015.1). It is likely that in some cases only a portion of the gene was sequenced to detect pathogenic variants reported in these databases. This is unlikely to affect the results of our test because even in such scenarios, at the time of sequencing it was unknown which regions had more intolerant subRVIS scores, and thus the sequencing efforts would not be preferentially biased towards sequencing of subRVIS intolerant regions. We also limited the reported pathogenic variants to missense variants that were not adjacent to a canonical splice site (within one codon) and not predicted to cause loss of function (LoF) (Methods), as LoF variants, with some exceptions, generally damage the function of the entire protein irrespective of where within the protein they occur.
Our first framework is a gene-by-gene assessment of whether regional intolerance scores can predict the distribution of known pathogenic variants within each gene (Methods). Given that some genes have very few reported pathogenic variants or very few sub-regions, and that this test is applied separately to each gene, this test has limited power to detect significance. We limited our dataset to genes with at least two regions and at least one reported pathogenic variant (2,888 genes in domain subRVIS, 2,910 genes in exon subRVIS; Additional file 3 and Additional file 4).
Our second framework tests whether regional intolerance scores can predict the presence of known pathogenic variants on a genome-wide scale. This test is limited to the subset of genes for which we have reported pathogenic variants (3,046 genes; Additional file 5). Though this assessment is limited by our test data and does not cover all genes, this assessment can be used as an indicator to how well we expect regional intolerance scores to predict the overall distribution of pathogenic variants, and not just those that have been reported in our test data.
Gene-specific testing
For the gene-specific testing, we focused our analyses on genes with at least two regions and at least one reported pathogenic variant. In domain subRVIS, we were able to assess 2,888 genes (Additional file 3). For 182 of the 2,888 genes (6.3 %), we found a significant relationship between domain subRVIS and the distribution of pathogenic variants (α = 0.05, false discovery rate (FDR); Additional file 3).
We ran the same assessment across the exon subRVIS scores. Here, we were able to assess 2,910 genes (Additional file 4). For 102 of the 2,910 genes (3.5 %), we found a significant relationship between exon subRVIS and the distribution of pathogenic variants (α = 0.05, FDR; Additional file 4).
Among these 182 genes where domain subRVIS predicts where mutations are found, many different patterns are represented. In some cases, genes are somewhat evenly divided into more and less tolerant regions. One example in this category is the ATP1A3 gene (Fig. 1). Overall, ATP1A3 is a highly intolerant gene [1], which has been previously implicated in alternating hemiplegia of childhood and rapid-onset dystonia–parkinsonism [12, 13]. It falls in the 3rd percentile of the overall genic intolerance scores. ATP1A3 has roughly two intolerance levels. The two most intolerant regions have intolerance scores of just below –1. These regions occupy 43 % of ATP1A3's coding region. The remaining regions are more tolerant, with scores ranging from –0.573 to 0.144 and an average score of –0.145. Interestingly, these more tolerant regions carry far fewer previously identified pathogenic mutations (Fig. 1).
Distribution of reported pathogenic variants in ATP1A3. This figure shows the distribution of reported variants in ATP1A3. Each CDD conserved domain type is annotated in a different color. The Y axis represents the domain subRVIS scores. Each reported variant is marked with a blue circle
Some genes, however, show a far more extreme pattern. Overall, the MAPT gene (Fig. 2) is highly tolerant (98th percentile) despite carrying mutations that cause frontotemporal dementia [11, 14]. In fact, only a small proportion of the gene (26 %) is highly intolerant relative to the remainder.
Distribution of reported pathogenic variants in MAPT. This figure shows the distribution of reported variants in MAPT. Each CDD conserved domain type is annotated in a different color. The Y axis represents the domain subRVIS scores. Each reported variant is marked with a blue circle
Strikingly, nearly all the reported pathogenic MAPT variants fall within two small intolerant sub-regions of MAPT. The third region is tolerant to variation, and therefore driving the overall genic intolerance score up despite the clear presence of a portion of the gene that causes disease when mutated. Though a fraction of the reported pathogenic variants is from publications that only sequenced exons falling in the two intolerant sub-regions [11], even when those variants are discounted the gene-specific test P value for MAPT remains unchanged and enrichment of reported pathogenic variants falling in the two intolerant regions remains clear (Fig. 2, FDR P value: 0.002).
Genome-wide testing
Encouraged by the fact that subRVIS can, for at least some genes, clearly predict where within disease-associated genes pathogenic mutations are found, we sought to assess the genome-wide prediction of the regional intolerance scores. To this end, we implemented a logistic regression model to test how well regional intolerance scores can predict the presence of reported pathogenic variants within each sub-region genome-wide. For each set of intolerance scores tested, we also generated and tested 1,000 negative test sets. The comparison of the true P value to the negative set P values is termed the resampling P value (Methods). We restricted the test to the subset of genes that carry reported pathogenic variants (3,046 genes, Additional file 5). Chromosomes Y and MT were not assessed.
Overall, we found domain subRVIS to be predictive of the presence of pathogenic variants within domain encoding regions (P value: 1 × 10–5, resampling P value: 0.001, score effect size: –0.08, 3,046 genes).
We further wanted to verify that we were not recapturing the overall genic RVIS scores and confirm that dividing the gene into domain sub-regions is indeed adding information. We created a score vector in which each domain sub-region is assigned its gene's overall genic RVIS score in place of its localized domain subRVIS score. We assessed this genic score vector across the subset of genes for which we have both subRVIS and RVIS scores (Additional file 5), and found that while genic RVIS is not predictive in this framework, subRVIS remains predictive for the subset of genes for which we have both subRVIS and genic RVIS scores (genic RVIS P value: 0.137, resampling P value: 0.142, score effect size: 0.03, 2,874 genes; domain subRVIS P value: 0.0002, resampling P value: 0.01, score effect size: –0.07, 2,874 genes).
To assess the relationship between domain subRVIS and phylogenetic conservation, we similarly constructed a conservation score for each domain (domain subGERP). The Pearson's correlation coefficient between domain subGERP and domain subRVIS is –0.204 (P value: <2.2 × 10–16; 95 % confidence interval: (–0.21, –0.197)). We found that domain subGERP is also predictive in our testing framework (P value: 3.5 × 10–6, resampling P value: 0.001, score effect size: 0.09, 3,046 genes). Furthermore, in a joint model with both domain subRVIS and domain subGERP, we found that both domain subRVIS and domain subGERP remain predictive (domain subRVIS P value: 0.0003, score effect size: –0.07; domain subGERP P value: 8.7 × 10–5, score effect size: 0.07; 3,046 genes), indicating that both scores add significant independent information about the localization of pathogenic variation.
To formally test the contribution of these different scores, we calculated and compared the Akaike information criterion (AIC) of the above model, using different predictor combinations of the two significant scores (subRVIS and subGERP). Based on these comparisons, neither subRVIS nor subGERP appears to be a significantly stronger predictor (Table 1). We further found that including both subRVIS and subGERP minimizes information loss beyond subGERP alone (P value: 0.004) and beyond subRVIS alone (P value: 0.001).
The results above show that analyzing the intolerance of regions of genes corresponding to protein domains can provide significant information of where disease causing mutations are likely to be found. However, this does not itself indicate that the use of the protein domains adds information. In fact, it could be the case that any sub-divisions of genes of similar size to protein domains would allow such prediction. More generally, of course, it could be that other ways of sub-dividing genes could be even more informative.
To explore these questions, we first tried to determine whether dividing the genes into biological domains adds information beyond dividing genes into similarly sized sub-regions without biological information. We permuted the CDD domains within each gene randomly, and generated domain subRVIS scores for each of these random permutations (Methods). We then assessed the prediction of these scores using the same model we used for the original scores. We repeated this 100 times, and created a distribution of the effect sizes of subRVIS scores across the random permutations. While we expect that many, or possibly all, of these divisions will be significantly predictive, we sought to test whether we have significantly more information in the biological division. The domain division score effect size has a larger absolute value than 99 out of the 100 permuted division score effect sizes. Thus, incorporating biological information does seem to contribute useful information beyond simply considering randomly assigned parts of the gene, when the units correspond in size to protein domains (permutation P value: 0.02; Additional File 6: Figure S1).
Next, we sought to explore whether the exonic division might do better than the division into regions of genes corresponding to protein domains. With the exonic division, Pearson's correlation coefficient between exon subRVIS and exon subGERP is –0.126 (P value: <2.2 × 10–16; 95 % confidence interval: (–0.130, –0.121)). We found that using intolerance scores at the level of the exon is also predictive of the presence of pathogenic variants within exons (P value: 0.0001, resampling P value: 0.001, score effect size: –0.04, 3,046 genes). Further, we found that exon subGERP is also predictive in this framework (P value: 7.8 × 10–16, resampling P value: 0.001, score effect size: 0.09, 3,046 genes).
Similar to our analysis of domain encoding regions, this does not itself indicate that the use of the biological parts is actually adding information; it could be that simply considering parts of genes would allow comparable performance. Thus, just as we did with domains, we generated 100 sets of coordinates in which we permuted the exons within each gene randomly and generated exon subRVIS scores for these shuffled exons (Methods). We then assessed the prediction of these scores. The exon division score effect size has a smaller absolute value than all but one of the permuted division score effect sizes. Thus, it appears that incorporating the exonic information may not contribute useful information beyond other divisions in which the units correspond in size to exons (Additional file 6: Figure S2).
Examining the relationship with variant level scores
Although the subRVIS approach does not constitute a variant level predictor, we wanted to verify that the information we were gaining from subRVIS was independent from the information already available from commonly used variant level predictors. Thus, we sought to explore the relationships between domain subRVIS and three variant level predictors: MutationTaster [2], PolyPhen-2 [4], and CADD [3]. We based our assessment on a set of 250,000 simulated variants (Additional file 7) within the domain subRVIS coordinates. The full results of this assessment can be found in Additional file 8.
We found that neither PolyPhen-2 nor CADD strongly correlated with domain subRVIS, with Pearson's correlation coefficients of –0.0548 and –0.0811, respectively. The negative correlation is expected, as lower subRVIS scores indicate more intolerant regions and higher PolyPhen-2 or CADD scores indicate more damaging variants.
We converted MutationTaster's predictions into scores on a scale of 0 to 1, with 0 corresponding to predicted pathogenic and 1 corresponding to predicted non-pathogenic (Methods). The Pearson's correlation coefficient with the MutationTaster scores is 0.159. Thus, domain subRVIS and MutationTaster are correlated to a higher degree than domain subRVIS and either of the other two scores (PolyPhen-2 and CADD). This is not unexpected, as MutationTaster does consider protein domain information when it is available.
Given this overlap, we sought to explore the relationship between the MutationTaster predictions and domain sub-regions where pathogenic variants have been previously reported. We divided the MutationTaster variant scores based on whether the corresponding simulated variant falls in a domain sub-region that has previously reported at least one pathogenic variant (n = 20,382 simulated variants) or not (n = 228,116 simulated variants). We compared these two distributions of scores and found that they differ (P value: 2.48 × 10–256, Wilcoxon rank sum test), with lower (more likely pathogenic) variant level MutationTaster scores corresponding to variants identified in disease-associated regions.
Application to patient data
We have previously shown that neuropsychiatric case populations show an enrichment of de novo mutations that have a damaging (≥0.95) PolyPhen-2 score [4] and occur in an intolerant (≤25th percentile) genic RVIS gene when compared to de novo mutations found among controls [1]. The idea behind this multi-tiered approach that includes both variant level and regional level prioritizations is that the interpretation of a variant's effect is more informative when it is known if the variant falls in a region that is depleted of functional variation. Thus, if a variant is likely damaging to the protein and also affects an intolerant gene, it is more likely to be pathogenic and therefore we expect an enrichment of these in cases. We designated mutations fitting these criteria as 'hot zone' mutations.
We applied this approach using the subRVIS scores in place of the genic RVIS scores on two case datasets: de novo mutations in autism [15] and in epileptic encephalopathies [16]. For controls, we used the controls provided in [15] as controls for both sets of cases (Methods).
We found that within the epilepsy cohort, 77 out of 366 (21 %) of the de novo mutations were subRVIS hot zone mutations, while only 212 out of 1345 (16 %) of the de novo mutations in controls were (Fisher's exact test P value: 0.018). Using the same test with the genic RVIS scores gave a stronger signal (Fisher's exact test P value: 0.001). In the autism cohort, we found a strong signal for genic RVIS (Fisher's exact test P value: 0.0001) and an insignificant subRVIS score signal (Fisher's exact test P value: 0.275).
Despite the fact that, for now, genic RVIS is still more predictive, we are encouraged by the fact that we can still detect hot zone de novo mutation enrichment with a focus on sub-regions. This is especially impressive given the smaller size of the subRVIS regions. As the number of available control reference cohorts grows, the resolution of the subRVIS score is anticipated to improve.
Despite its introduction only 2 years ago, it is already clear that consideration of genic intolerance provides a valuable new dimension in the interpretation of patient genomes. Intolerance scores have been used repeatedly to interpret observations of mutations in patients with unresolved or undiagnosed diseases [17–22] and have been used to interpret de novo mutations across a broad range of diseases [23–26].
Despite this promise, the gene as the unit of analysis is coarse. Here we have shown that sub-dividing genes into regions corresponding to protein domains or regions of the size of exons can provide significant information about where in disease causing genes pathogenic mutations are most likely to be found. There are a number of ways we expect this added resolution to be useful in interpreting genomes. An obvious example is to focus attention on mutations that occur in genes that are not intolerant overall, but that occur in a particularly intolerant region of the gene. The reverse pattern is also important. One of the most challenging aspects of the interpretation of personal genomes today is the high percentage of false positive mutations in disease databases, and the fact that these mutations clearly have higher population allele frequencies than the true positives. It is very likely that these false positive mutations are preferentially drawn from the more tolerant regions of genes that cause disease, giving us a possible new pointer to candidate false positive mutations.
To be alert to such possibilities, we have created an online tool for plotting variants across domain sub-regions within a gene (www.subrvis.org). This tool can help researchers explore which domains within a gene their variants of interest fall in and what the corresponding subRVIS scores are. We further constructed a score to reflect the degree to which genes vary in intolerance among their regions. The expectation is that in some genes the intolerance to variation will be uniform across the sub-regions, while in others it will vary greatly among them. Thus, this score was constructed by calculating, per gene, the standard deviation of its domain subRVIS scores. Only genes with at least three domain sub-regions were assessed. Though these scores can be useful in predicting whether we expect pathogenic mutations to cluster in specific sub-regions within a given gene, currently there is not a relationship between these scores and whether known pathogenic mutations actually do so. These scores are available in Additional file 9.
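As an illustration, a per-gene variability score of this kind could be computed along the following lines. This is a minimal sketch assuming a simple table of domain sub-region scores; the gene names and score values are hypothetical, and this is not the pipeline used to produce Additional file 9.

```python
import pandas as pd

# Hypothetical table: one row per domain sub-region with its gene and subRVIS score.
domains = pd.DataFrame({
    "gene": ["MAPT", "MAPT", "MAPT", "ATP1A3", "ATP1A3"],
    "domain_subrvis": [1.8, -1.2, -0.9, -1.0, -0.1],
})

counts = domains.groupby("gene")["domain_subrvis"].transform("size")
variability = (
    domains[counts >= 3]                 # only genes with at least three sub-regions
    .groupby("gene")["domain_subrvis"]
    .std()                               # per-gene standard deviation of domain scores
)
print(variability)
```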
One important point to emphasize is that the minimum unit size that can be effective in an RVIS framework depends critically on the number of individuals that have been sequenced in the reference cohort, since the ability to distinguish different genomic regions depends on observing variation in those regions. While our research shows the utility of the basic subRVIS approach, the power of this approach will steadily increase as the number of sequenced individuals increases.
The research presented here demonstrates the importance of accounting for protein domains in human disease studies. In particular, quantifying a gene's domains' intolerance to variation has utility in identifying causal variants. We anticipate that our methodology will continue to improve as we gain access to more sequencing data. There are other ways, outside of conserved domains, to divide a protein into domains, such as tertiary structure. Future approaches can incorporate these and other annotations to divide proteins into biologically relevant sub-regions.
The software used in this publication is available on GitHub (https://github.com/igm-team/subrvis), released under the MIT license.
Defining the domains
We define each gene's protein-coding sequence based on its Consensus Coding Sequence (CCDS release 15, accessed November 2013) entry [9]. In order to avoid multiple gene definitions, genes with multiple CCDS transcripts are assigned the gene's canonical transcript, as this is the longest and most encompassing transcript. Next, each gene's CCDS entry is translated into protein sequence and aligned to the CDD (version 3.11) [6] using RPS-BLAST (version 2.2.28+). No multi-domains in the CDD were used, to avoid grouping multiple single domains into one sub-region in our domain definitions. Only alignments with a maximal E-value of 1e-2 were considered. When two domains overlapped in the RPS-BLAST results, the domain with the better alignment score was kept.
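As an illustration of the overlap resolution and gap filling described above, the following Python sketch assumes the RPS-BLAST hits have already been parsed into (start, end, score) tuples in protein coordinates. The greedy best-score-first resolution and the 1-based inclusive coordinates are assumptions for the sake of the example, not a description of the published implementation (available in the subrvis GitHub repository).

```python
def resolve_domains(hits, protein_length):
    """hits: list of (start, end, bit_score) CDD alignments, 1-based inclusive,
    already filtered to E-value <= 1e-2. Returns non-overlapping sub-regions
    covering the whole protein: kept CDD hits plus the unaligned stretches between them."""
    kept = []
    for start, end, score in sorted(hits, key=lambda h: -h[2]):  # best score first
        if all(end < s or start > e for s, e, _ in kept):        # drop hits overlapping a better one
            kept.append((start, end, score))
    kept.sort()

    regions, cursor = [], 1
    for start, end, _ in kept:
        if start > cursor:
            regions.append((cursor, start - 1, "undefined"))     # unaligned stretch before the hit
        regions.append((start, end, "cdd_domain"))
        cursor = end + 1
    if cursor <= protein_length:
        regions.append((cursor, protein_length, "undefined"))    # trailing unaligned stretch
    return regions

# Toy example: three hypothetical hits on a 250-residue protein.
print(resolve_domains([(10, 60, 85.0), (50, 120, 40.0), (150, 200, 55.0)], 250))
```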
Calculating subRVIS scores
subRVIS scores were calculated for the domains, exons, 100 permuted domains, and 100 permuted exons. The scores were calculated using the NHLBI Exome Sequencing Project (ESP) [8], as described in [1]. Each position in each sub-region is first checked for adequate coverage. Only positions with ≥10× average coverage in the ESP were considered. ESP variant calls were further filtered to only retain variants with a 'PASS' filter status. Following this, using the ESP variant calls that qualify based on both these criteria, the count of common (>0.1 % minor allele frequency) non-synonymous variants in each sub-region is regressed against the tally of all variants in the sub-region, as per [1]. The studentized residual of each sub-region is its score. Thus, the subRVIS score quantifies the departure of the observed number of common non-synonymous variants from the expectation given the total number of variants in each genomic region. One of the outcomes of using the residuals from this regression as the score is that they are, by definition, orthogonal to the tally of all variants in their corresponding sub-region and thus orthogonal to the overall distribution of variants in that genomic region. A total of 89,335 domain subRVIS scores (Additional file 10) and 185,355 exon subRVIS scores (Additional file 11) were generated. Y and MT chromosome genes were not assessed.
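A minimal sketch of this scoring step is given below. It assumes the per-sub-region tallies have already been computed from adequately covered, 'PASS'-filtered ESP calls, and it uses externally studentized residuals from statsmodels; the counts are simulated for illustration and the exact residual variant used in the published implementation may differ.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import OLSInfluence

rng = np.random.default_rng(0)

# Hypothetical per-sub-region tallies: all qualifying ESP variants (predictor) and
# common (>0.1 % MAF) non-synonymous variants (response) in each sub-region.
total_variants = rng.poisson(30, size=500)
common_functional = rng.binomial(total_variants, 0.15)

fit = sm.OLS(common_functional, sm.add_constant(total_variants)).fit()

# The studentized residual of each sub-region is its subRVIS score; more negative
# scores indicate fewer common functional variants than expected, i.e. greater intolerance.
subrvis = OLSInfluence(fit).resid_studentized_external
print(subrvis[:5])
```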
Score prediction test dataset
To test the utility of subRVIS scores in predicting which regions are more likely to carry pathogenic variants, we required a database of known pathogenic variants. We combined data from ClinVar (accessed June 2015) [10] and HGMD (release 2015.1) [11], filtering for ClinVar entries labeled 'Pathogenic' and HGMD entries tagged as 'DM' (disease causing mutation).
We then ran Variant Effect Predictor (version 73) [27] and filtered for canonical variants that were labeled as 'missense_variant' and were not labeled as any of the following: 'incomplete_terminal_codon_variant', 'splice_region_variant', 'stop_gained', and 'stop_lost'.
Calculating mutation rates
As we are testing against raw counts of pathogenic mutations, we required a covariate to account for the difference in counts that are due to sequence mutability and unrelated to intolerance. For this, we calculated the mutation rates for each sub-region (Additional file 10 and Additional file 11) based on its sequence composition [23]. This calculation is not based on any other data used in this manuscript.
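A rough sketch of a sequence-composition-based rate in the spirit of [23] is shown below. The trinucleotide probability table is invented for illustration (a real table covers all 64 contexts with per-substitution rates), so this is an assumption-laden toy rather than the actual calculation.

```python
# Hypothetical per-trinucleotide mutation probabilities (summed over the three
# possible substitutions of the middle base); a real table as in [23] has 64 entries.
TRINUC_RATE = {
    "ACA": 3.0e-9, "ACG": 1.2e-8, "CCG": 1.5e-8, "TCG": 1.4e-8,
    "AAA": 2.0e-9, "GCG": 1.3e-8, "ATA": 2.5e-9, "CGA": 1.6e-8,
}
DEFAULT_RATE = 4.0e-9  # fallback for contexts missing from this toy table

def region_mutation_rate(seq):
    """Sum the context-dependent mutation probability of every base that has a
    full trinucleotide context within the sub-region sequence."""
    return sum(
        TRINUC_RATE.get(seq[i - 1:i + 2], DEFAULT_RATE)
        for i in range(1, len(seq) - 1)
    )

print(region_mutation_rate("ACGTACGGTACCGA"))
```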
Regional score gene-specific prediction test model
To test whether the subRVIS scores are predictive at the single gene level, we designed and implemented a permutation test that predicts the distribution of reported pathogenic variants within a single gene using a set of scores. Genes with fewer than two regions or without at least one reported pathogenic variant were not assessed.
Let \( n_g \) be the number of regions in gene \( \mathit{\mathsf{g}} \). Let \( {\mathit{\mathsf{Y}}}_{\mathit{\mathsf{g}}} \) be a vector of length \( n_g \) containing the counts of reported pathogenic variants across sub-regions in gene \( \mathit{\mathsf{g}} \). Let \( {\mathit{\mathsf{Z}}}_{\mathit{\mathsf{g}}} \) be a vector of length \( n_g \) containing the mutation rates across sub-regions in gene \( \mathit{\mathsf{g}} \), based on sequence composition [23]. Let \( {\mathit{\mathsf{X}}}_{\mathit{\mathsf{g}}} \) be a vector of length \( n_g \) containing the intolerance scores across sub-regions in gene \( \mathit{\mathsf{g}} \).
For each gene, we then calculated the expected distribution of pathogenic variants within the gene based on the sub-region mutation rates and the total number of reported pathogenic variants within the gene:
$$ {E}_{\mathsf{g}i}=\left(\sum_i^{n_g}{Y}_{\mathsf{g}i}\right)\ast \left(\frac{Z_{\mathsf{g}i}}{{\displaystyle {\sum}_i^{n_g}}{Z}_{\mathsf{g}i}}\right). $$
Thus, \( {\mathit{\mathsf{E}}}_{\mathit{\mathsf{g}}} \) is a vector of the expected number of reported pathogenic variants in each sub-region based on the gene's mutation rates and total number of reported pathogenic variants.
For each sub-region we subtracted the expected number of pathogenic variants \( \left({\mathit{\mathsf{E}}}_{\mathit{\mathsf{g}}}\right) \) from the observed number of pathogenic variants \( \left({\mathit{\mathsf{Y}}}_{\mathit{\mathsf{g}}}\right) \). This vector denotes the departure of each sub-region's tally of reported pathogenic variants from its expected tally. This vector is designated \( {\mathit{\mathsf{D}}}_{\mathit{\mathsf{g}}} \).
We calculated a score per gene, designated \( {\mathit{\mathsf{C}}}_{\mathit{\mathsf{g}}} \), denoting both the departure of the reported pathogenic variants distribution from the expectation and the relationship with the intolerance score:
$$ {C}_{\mathsf{g}}=cov\left({D}_{\mathsf{g}},{X}_{\mathsf{g}}\right). $$
We expect that a lower intolerance score will correspond with higher pathogenic variant counts than expected, and therefore we expect the covariance to be negative. To test this, we performed a permutation test, where each permutation has a different distribution of the reported pathogenic variants.
For each permutation, we drew the distribution of the reported pathogenic variants from a multinomial distribution, where the number of trials is the total tally of reported pathogenic variants for the gene and the probability for each sub-region is the fraction of the gene's mutation rate that it occupies \( \left(\frac{Z_{\mathsf{g}i}}{{\displaystyle {\sum}_i^{n_g}}{Z}_{\mathsf{g}i}}\right) \). Following this, we calculated the departure from expectation and the covariance as described above.
We repeated this \( n_p \) = 20,000 times. To test how the intolerance score prediction for the true distribution of pathogenic variants compares to the prediction for the permuted distributions, we counted how many times out of the \( n_p \) = 20,000 permutations the permuted covariance is smaller than or equal to the true covariance. This count is designated G. The permutation P value is calculated by the following equation: \( (G + 1)/(n_p + 1) \).
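This test translates fairly directly into code. The sketch below is a minimal NumPy implementation of the covariance permutation test as described above, applied to a hypothetical four-region gene; it is not the published implementation.

```python
import numpy as np

def gene_permutation_test(pathogenic_counts, mutation_rates, scores,
                          n_perm=20_000, seed=0):
    """Compare cov(observed - expected, score) to covariances obtained by
    redistributing the gene's pathogenic variants across its sub-regions
    according to their mutation rates."""
    rng = np.random.default_rng(seed)
    y = np.asarray(pathogenic_counts, dtype=float)
    z = np.asarray(mutation_rates, dtype=float)
    x = np.asarray(scores, dtype=float)

    probs = z / z.sum()
    total = int(y.sum())
    expected = total * probs
    c_true = np.cov(y - expected, x)[0, 1]

    # Permutations: multinomial redistribution of the same total count.
    perms = rng.multinomial(total, probs, size=n_perm)
    c_perm = np.array([np.cov(d, x)[0, 1] for d in (perms - expected)])

    g = int(np.sum(c_perm <= c_true))   # permuted covariance <= observed covariance
    return (g + 1) / (n_perm + 1)

# Toy example: pathogenic variants concentrate in the lowest-scoring sub-region.
print(gene_permutation_test([5, 0, 1, 0], [1.0, 1.2, 0.8, 1.0], [-1.5, 0.2, -0.3, 0.5]))
```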
Regional score genome-wide prediction test model
To test whether the regional scores are predictive at the level of the genome, we designed and implemented a model that predicts the presence or absence of reported pathogenic variants at each sub-region using a set of scores.
As the presence or absence of reported pathogenic variants in a sub-region will depend greatly on that sub-region's mutability, what we are trying to determine in this model is whether our scores can predict the presence of pathogenic variation in a sub-region after accounting for the region's mutability. We divide any score predictors (subRVIS, subGERP, genic RVIS) within this model by their standard deviation in order to allow for the interpretation of the effect sizes in terms of standard deviations.
Previously, we were considering a single gene indexed by g. Here, we are considering a genome-wide analysis; therefore, we dropped the g from the notation. Specifically, let n be the number of sub-regions across all the genes in the genome. Let Y be a vector of length n with components \( (Y_i,\ i = 1,\dots, n) \) corresponding to each sub-region taking on either a 1 or a 0, respectively, denoting presence or absence of at least one non-LoF pathogenic variant within the sub-region. Let Z be a vector of length n containing the sub-regions' mutation rates, based on sequence composition [23]. Let X be a vector of length n containing the sub-regions' intolerance scores, scaled across all sub-regions by dividing each intolerance score by the standard deviation of all the sub-regions' intolerance scores.
To evaluate the relationship between the score and the presence of pathogenic variants we fit the following logistic regression model:
$$ logit\left( \Pr \left({Y}_i=1\right)\right)=\alpha +{\beta}_1\ast \log \left({Z}_i\right)+{\beta}_2{X}_i, $$
where i=1,...,n.
Note that \( \beta_2 \) captures the strength of the relationship between X, the intolerance scores, and Y, while adjusting for regional mutability. We refer to \( \beta_2 \) as the 'score effect size' and report it in the text as a metric for how well a given model performs.
Next, we wanted to assess how the model performs on negative test sets. For each intolerance score genome-wide test we generated 1,000 resampled response vectors. To create the resampled response vectors, we resampled the 0 s and 1 s within the true response vector (Y) so that sub-regions with a larger mutation rate (Z) were more likely to be assigned a 1. Specifically, we resampled Y without replacement with sampling probabilities given by \( \frac{Z}{{\displaystyle {\sum}_i^n}{Z}_i} \). The idea is that under neutrality the more mutable a region is the more likely it is to carry mutations. By using this resampling method, we preserve the number of regions containing pathogenic variants and therefore the number of zeros and ones in our response vector remains the same.
Using the same genome-wide assessment that we used for the observed data, we tested the intolerance scores' prediction against each of the 1,000 resampled response vectors. This resulted in a vector of negative test set P values. To test how the observed set of reported pathogenic variants compares to that obtained by resampling, we enumerated, across all (R = 1,000) resampled datasets the number of times (C) the P value from the resampled data analysis is larger than the P value obtained from the observed data analysis. We define our resampling P value as: (R – C + 1)/(R + 1).
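A compact sketch of this model and of the mutability-weighted resampling is given below, using statsmodels' logistic regression on simulated inputs. The data-generating step is purely illustrative, and the published implementation in the subrvis repository may differ in detail.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, R = 2_000, 1_000

# Hypothetical inputs: per-sub-region mutation rate (Z), scaled intolerance score (X),
# and an indicator Y of whether the sub-region carries >=1 reported pathogenic variant.
Z = rng.gamma(2.0, 1e-6, size=n)
X = rng.normal(size=n)
X = X / X.std()
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(-2.0 + np.log(Z / Z.mean()) - 0.3 * X))))

design = sm.add_constant(np.column_stack([np.log(Z), X]))
fit = sm.Logit(Y, design).fit(disp=0)
effect_size, p_obs = fit.params[2], fit.pvalues[2]   # beta_2 and its P value

# Negative test sets: keep the same number of 1s but reassign them without replacement,
# with more mutable sub-regions more likely to receive a 1.
larger = 0
for _ in range(R):
    ones = rng.choice(n, size=int(Y.sum()), replace=False, p=Z / Z.sum())
    Y_neg = np.zeros(n)
    Y_neg[ones] = 1.0
    larger += sm.Logit(Y_neg, design).fit(disp=0).pvalues[2] > p_obs

print(effect_size, (R - larger + 1) / (R + 1))       # score effect size, resampling P value
```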
AIC comparisons
To compare AICs between two models, we first identify the model with the lower AIC, representing the model estimated to have less information loss. We designated this AICmin and we designated the other AIC as AICmax. To calculate the relative probability that AICmax is the model that minimizes information loss (designated here as p), we calculate:
$$ p = \exp\left(\frac{AIC_{min} - AIC_{max}}{2}\right). $$
The resulting value indicates that the probability that AICmax minimizes the information loss from AICmin is p. A high p indicates that AICmax may have less information loss than AICmin, while a low p indicates that it is unlikely that AICmax minimizes information loss in comparison to AICmin.
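This relative-likelihood calculation is a one-liner; a small helper, with an arbitrary pair of AIC values for illustration, might look as follows.

```python
import math

def aic_relative_likelihood(aic_a, aic_b):
    """Probability that the higher-AIC model minimizes information loss
    relative to the lower-AIC model: exp((AIC_min - AIC_max) / 2)."""
    aic_min, aic_max = min(aic_a, aic_b), max(aic_a, aic_b)
    return math.exp((aic_min - aic_max) / 2)

# Hypothetical AICs: the larger-AIC model is unlikely to be the better one (~0.004).
print(aic_relative_likelihood(1000.0, 1011.0))
```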
Region permutation test
To test our model on randomly permuted regions, we performed the following:
For each gene we took into account the sizes of each of its sub-regions.
We permuted the sizes of each gene's sub-regions, resulting in a set of the same sub-regions in a random order.
We then re-divided each gene based on the permuted set of sub-regions. Thus, after the permutation each gene maintains the same number and size distribution of sub-regions as in the biological division.
For this permuted set of sub-regions, we generated sub-region intolerance scores, calculated the sub-region mutation rates and counted the number of pathogenic variants in each sub-region.
Following this, we tested prediction across the permuted set of sub-regions using the same genome-wide assessment that we used for the biological division.
We recorded the effect size of the intolerance scores in this assessment.
We repeated steps (1) through (6) 100 times.
This resulted in a vector of effect sizes that constitutes our null distribution of effect sizes for the permutation test.
To test how the biological division compares to the permuted divisions, we counted how many times out of the \( n_p \) = 100 permutations the absolute value of the permuted division score effect size is smaller than the absolute value of the biological division score effect size. This count is designated X. The permutation P value is calculated by the following equation: \( (n_p - X + 1)/(n_p + 1) \).
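Two pieces of this procedure are easy to sketch in code: the within-gene shuffling of sub-region sizes (steps 2 and 3) and the final permutation P value. The fragment below is a simplified illustration with made-up sizes and effect sizes, not the full pipeline, which also regenerates intolerance scores and mutation rates for every permuted division.

```python
import numpy as np

rng = np.random.default_rng(0)

def permute_gene_regions(region_sizes, gene_start):
    """Shuffle a gene's sub-region sizes and lay them back down in order, so the
    permuted gene keeps the same number and size distribution of sub-regions
    but with randomized boundaries."""
    sizes = rng.permutation(region_sizes)
    starts = gene_start + np.concatenate(([0], np.cumsum(sizes)[:-1]))
    return list(zip(starts, starts + sizes - 1))   # (start, end) per permuted sub-region

def permutation_p_value(true_effect, permuted_effects):
    """Compare |effect size| of the biological division to the null distribution
    of |effect sizes| from the permuted divisions."""
    perm = np.abs(np.asarray(permuted_effects))
    n_p = len(perm)
    x = int(np.sum(perm < abs(true_effect)))       # permuted |effect| smaller than observed
    return (n_p - x + 1) / (n_p + 1)

print(permute_gene_regions([300, 120, 450], gene_start=1_000))
print(permutation_p_value(-0.08, rng.normal(0, 0.02, size=100)))  # hypothetical effect sizes
```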
GERP scores
To quantify phylogenetic conservation across sub-regions, we generated a novel vector for each sub-region division that simply reflects the average GERP++ [7] score (where available) for those coordinates (Additional file 10 and Additional file 11).
MutationTaster scores
We ran MutationTaster's QueryEngine (http://www.mutationtaster.org/StartQueryEngine.html, accessed September 2015) [2] with default options, except for the option to filter against the 1000 Genomes Project. This option is selected by default; as we wanted analysis results for all variants, we deselected it.
MutationTaster uses a Bayes classifier to determine whether a variant is a polymorphism or disease causing. The classifier has four output options:
disease_causing: probably deleterious.
disease_causing_automatic: known to be deleterious based on existing databases.
polymorphism: probably harmless.
polymorphism_automatic: known to be harmless based on existing databases.
Along with the prediction, for each variant MutationTaster outputs an estimated probability for the prediction. More information on MutationTaster can be found at http://www.mutationtaster.org/info/documentation.html or at [2].
To convert these results into a score between 0 and 1, we devised the following criteria:
If the prediction is polymorphism, use the probability as the score. This will always be above 0.5. Thus, predicted polymorphisms receive scores in the range of 0.5 to 1.
If the prediction is disease_causing, the score is the probability subtracted from 1. As the probability will always be above 0.5, the predicted disease causing variants receive scores in the range of 0 to 0.5.
If the prediction is either polymorphism_automatic or disease_causing_automatic, this indicates that the variant's prediction is based on a database entry, not the Bayes classifier. If the Bayes classifier disagrees with the automatic prediction, the probability will be less than 0.5. In these instances, we reassigned the variant's prediction to match the Bayes classifier's and reassigned the probability to 1 minus the originally reported probability. Following this, we treated the variant as described above.
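A minimal Python sketch of these criteria (ours; the prediction labels follow the MutationTaster output options listed above):

def mutationtaster_score(prediction, probability):
    # "_automatic" predictions come from database entries; if the Bayes
    # classifier disagrees, the reported probability is below 0.5, so the
    # prediction is flipped and 1 minus the reported probability is used.
    if prediction.endswith("_automatic"):
        prediction = prediction[:-len("_automatic")]
        if probability < 0.5:
            prediction = ("polymorphism" if prediction == "disease_causing"
                          else "disease_causing")
            probability = 1.0 - probability
    if prediction == "polymorphism":
        return probability        # predicted harmless: scores in (0.5, 1]
    if prediction == "disease_causing":
        return 1.0 - probability  # predicted deleterious: scores in [0, 0.5)
    raise ValueError("unexpected prediction: " + prediction)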
Applying the hot zone approach
For both the autism [15] and epileptic encephalopathies [16] de novo mutations data we limited to single-nucleotide variants, falling in regions for which we have both subRVIS and genic RVIS scores. We calculated each variant's PolyPhen-2 score using PolyPhen-2 HumVar [4]. Synonymous variants were assigned 0 while canonical splice, stop gain, and stop loss variants were assigned 1. Mutations present as variants in the NHLBI ESP exome variant calls [8] were excluded.
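For illustration (a sketch of the scoring rule just described, not the authors' code; the consequence labels are our own):

def variant_score(consequence, polyphen2_humvar_score=None):
    # Synonymous variants are assigned 0; canonical splice, stop gain, and
    # stop loss variants are assigned 1; missense variants use their
    # PolyPhen-2 HumVar score.
    if consequence == "synonymous":
        return 0.0
    if consequence in ("canonical_splice", "stop_gain", "stop_loss"):
        return 1.0
    return polyphen2_humvar_score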
All the mutations in the epileptic encephalopathies data from the Epi4K study [16] are Sanger validated.
For the autism data from [15], we required that the mutations not be called in both siblings. We additionally required that either: (1) at least one of the institutes analyzing the data (Cold Spring Harbor Laboratory, Yale School of Medicine, University of Washington) had validated the mutation; or (2) at least one of the institutes labeled the mutation as a 'strong' variant call while no other institute labeled the mutation as 'not called' or 'weak'.
Estimate of the disease risk per protein domain
Given the potential interest in whether some CDD protein domain types are more likely to carry reported pathogenic mutations than others, we have created a table including the tally of reported pathogenic mutations and the cumulative mutation rate for each CDD protein domain type across genes (Additional file 12). This table also includes the tally divided by the cumulative mutation rate. This is meant to serve as an approximate estimate denoting the number of reported pathogenic mutations after controlling for mutation rate. For comparative purposes, a higher value indicates more reported mutations given the sequence context.
Ethical approval
No ethical approval was required.
The data and materials used in this manuscript are either previously published or are available in this publication as Additional files 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12. The software used in this publication is available on GitHub (https://github.com/igm-team/subrvis), released under the MIT license.
CDD:
Conserved domain database
FDR:
False discovery rate
HGMD:
Human Gene Mutation Database
LoF:
Loss-of-function
PSSM:
Position-specific score matrix
RVIS:
Residual Variation Intolerance Score
subRVIS:
Sub-region Residual Variation Intolerance Score
Petrovski S, Wang Q, Heinzen EL, Allen AS, Goldstein DB. Genic intolerance to functional variation and the interpretation of personal genomes. PLoS Genet. 2013;9(8):e1003709.
Schwarz JM, Cooper DN, Schuelke M, Seelow D. MutationTaster2: mutation prediction for the deep-sequencing age. Nat Methods. 2014;11(4):361–2.
Kircher M, Witten DM, Jain P, O'Roak BJ, Cooper GM, Shendure J. A general framework for estimating the relative pathogenicity of human genetic variants. Nat Genet. 2014;46(3):310–5.
Adzhubei IA, Schmidt S, Peshkin L, Ramensky VE, Gerasimova A, Bork P, et al. A method and server for predicting damaging missense mutations. Nat Methods. 2010;7(4):248–9.
UniProt C. UniProt: a hub for protein information. Nucleic Acids Res. 2015;43(Database issue):D204–12.
Marchler-Bauer A, Zheng C, Chitsaz F, Derbyshire MK, Geer LY, Geer RC, et al. CDD: conserved domains and protein three-dimensional structure. Nucleic Acids Res. 2013;41(Database issue):D348–52.
Davydov EV, Goode DL, Sirota M, Cooper GM, Sidow A, Batzoglou S. Identifying a high fraction of the human genome to be under selective constraint using GERP++. PLoS Comput Biol. 2010;6(12):e1001025.
Exome Variant Server, NHLBI GO Exome Sequencing Project (ESP). Seattle, WA. http://evs.gs.washington.edu/EVS/. Accessed 3rd August 2012.
Pruitt KD, Harrow J, Harte RA, Wallin C, Diekhans M, Maglott DR, et al. The consensus coding sequence (CCDS) project: Identifying a common protein-coding gene set for the human and mouse genomes. Genome Res. 2009;19(7):1316–23.
Landrum MJ, Lee JM, Riley GR, Jang W, Rubinstein WS, Church DM, et al. ClinVar: public archive of relationships among sequence variation and human phenotype. Nucleic Acids Res. 2014;42(Database issue):D980–5.
Stenson PD, Mort M, Ball EV, Shaw K, Phillips A, Cooper DN. The Human Gene Mutation Database: building a comprehensive mutation repository for clinical and molecular genetics, diagnostic testing and personalized genomic medicine. Hum Genet. 2014;133(1):1–9.
Heinzen EL, Swoboda KJ, Hitomi Y, Gurrieri F, Nicole S, de Vries B, et al. De novo mutations in ATP1A3 cause alternating hemiplegia of childhood. Nat Genet. 2012;44(9):1030–4.
de Carvalho AP, Sweadner KJ, Penniston JT, Zaremba J, Liu L, Caton M, et al. Mutations in the Na+/K+ -ATPase alpha3 gene ATP1A3 are associated with rapid-onset dystonia parkinsonism. Neuron. 2004;43(2):169–75.
Goedert M, Crowther RA, Spillantini MG. Tau mutations cause frontotemporal dementias. Neuron. 1998;21(5):955–8.
Iossifov I, O'Roak BJ, Sanders SJ, Ronemus M, Krumm N, Levy D, et al. The contribution of de novo coding mutations to autism spectrum disorder. Nature. 2014;515(7526):216–21.
EuroEPINOMICS-RES Consortium. Epilepsy Phenome/Genome Project, Epi4K Consortium. De novo mutations in synaptic transmission genes including DNM1 cause epileptic encephalopathies. Am J Hum Genet. 2014;95(4):360–70.
Chen YZ, Friedman JR, Chen DH, Chan GC, Bloss CS, Hisama FM, et al. Gain-of-function ADCY5 mutations in familial dyskinesia with facial myokymia. Ann Neurol. 2014;75(4):542–9.
Enns GM, Shashi V, Bainbridge M, Gambello MJ, Zahir FR, Bast T, et al. Mutations in NGLY1 cause an inherited disorder of the endoplasmic reticulum-associated degradation pathway. Genetics Med. 2014;16(10):751–8.
Hildebrand MS, Damiano JA, Mullen SA, Bellows ST, Oliver KL, Dahl HH, et al. Glucose metabolism transporters and epilepsy: only GLUT1 has an established role. Epilepsia. 2014;55(2):e18–21.
Homan CC, Kumar R, Nguyen LS, Haan E, Raymond FL, Abidi F, et al. Mutations in USP9X are associated with X-linked intellectual disability and disrupt neuronal cell migration and growth. Am J Hum Genet. 2014;94(3):470–8.
Puskarjov M, Seja P, Heron SE, Williams TC, Ahmad F, Iona X, et al. A variant of KCC2 from patients with febrile seizures impairs neuronal Cl- extrusion and dendritic spine formation. EMBO Rep. 2014;15(6):723–9.
Takata A, Xu B, Ionita-Laza I, Roos JL, Gogos JA, Karayiorgou M. Loss-of-function variants in schizophrenia risk and SETD1A as a candidate susceptibility gene. Neuron. 2014;82(4):773–80.
Epi4K Consortium, Epilepsy Phenome/Genome Project. De novo mutations in epileptic encephalopathies. Nature. 2013;501(7466):217–21.
Gilissen C, Hehir-Kwa JY, Thung DT, van de Vorst M, van Bon BW, Willemsen MH, et al. Genome sequencing identifies major causes of severe intellectual disability. Nature. 2014;511(7509):344–7.
McCarthy SE, Gillis J, Kramer M, Lihm J, Yoon S, Berstein Y, et al. De novo mutations in schizophrenia implicate chromatin remodeling and support a genetic overlap with autism and intellectual disability. Mol Psychiatry. 2014;19(6):652–8.
Zhu X, Need AC, Petrovski S, Goldstein DB. One gene, many neuropsychiatric disorders: lessons from Mendelian diseases. Nat Neurosci. 2014;17(6):773–81.
McLaren W, Pritchard B, Rios D, Chen Y, Flicek P, Cunningham F. Deriving the consequences of genomic variants with the Ensembl API and SNP Effect Predictor. Bioinformatics. 2010;26(16):2069–70.
The authors would like to thank the NHLBI GO Exome Sequencing Project and its ongoing studies which produced and provided exome variant calls for comparison: the Lung GO Sequencing Project (HL-102923), the WHI Sequencing Project (HL-102924), the Broad GO Sequencing Project (HL-102925), the Seattle GO Sequencing Project (HL-102926), and the Heart GO Sequencing Project (HL-103010). We would also like to thank Dr. Greg Gibson for helpful discussions and comments.
ABG is supported by the National Institute of Neurological Disorders and Stroke of the National Institutes of Health under Award Number F31NS092362. SP is a National Health and Medical Research Council of Australia (NHMRC) (CJ Martin) Early Career Fellow. This work was supported in part by the NIH Epi4K Sequencing, Bioinformatics and Biostatistics Core grant number U01NS077303. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders had no role in design, in the collection, analysis, and interpretation of data; in the writing of the manuscript; in the decision to submit the manuscript for publication.
Institute for Genomic Medicine, Columbia University, New York, NY, USA
Ayal B. Gussow, Slavé Petrovski, Quanli Wang & David B. Goldstein
Program in Computational Biology and Bioinformatics, Duke University, Durham, NC, USA
Ayal B. Gussow
Department of Medicine, The University of Melbourne, Austin Health and Royal Melbourne Hospital, Melbourne, VIC, Australia
Slavé Petrovski
Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA
Andrew S. Allen
Quanli Wang
David B. Goldstein
Correspondence to David B. Goldstein.
All authors participated in the design of the study. ABG coordinated the study and developed the supporting software. ABG and ASA designed the statistical models. QW generated the subRVIS scores. ABG prepared the manuscript with input from SP, ASA, and DBG. All authors read and approved the final manuscript.
Additional file 1:
A text file in BED format containing domain subRVIS sub-region boundaries. Each sub-region name contains three fields separated by a colon - the gene name; the domain type, which is either the CDD PSSM-Id of the aligned domain or a '-' denoting that no CDD domain was aligned to this region; the domain type again followed by an underscore, followed by the occurrence number of this domain type. (TXT 10178 kb)
Additional file 2:
A text file in BED format containing exon subRVIS sub-region boundaries. Each sub-region name contains three fields separated by a colon. Although for the exon sub regions only two fields are required, we used three fields to maintain consistency with the domain sub region boundaries file. Therefore the second and third fields are equal to each other. The format is: the gene name; the letter 'E' followed by the exon number; the letter 'E' followed by the exon number again. (TXT 6710 kb)
Additional file 3:
A text file of the FDR adjusted P values per-gene for the domain subRVIS gene-specific tests across the subset of genes for which we have at least two regions and at least one reported pathogenic variant. (TXT 35 kb)
Additional file 4:
A text file of the FDR adjusted P values per-gene for the exon subRVIS gene-specific tests across the subset of genes for which we have at least two regions and at least one reported pathogenic variant. (TXT 35 kb)
Additional file 5:
A text file containing the subset of genes for which we have reported pathogenic variants (3,046 genes). The columns denote whether or not we have subRVIS, RVIS, or subGERP scores for the gene in question. (TXT 53 kb)
Additional file 6:
A PDF containing Figure S1 and Figure S2, which show the distribution of score effect sizes from random permutations. (PDF 195 kb)
Additional file 7:
A text file containing the set of 250,000 random variants generated for the variant level predictors comparisons with the corresponding variant prediction scores and disease-associated status (1 indicates that the domain region the variant falls in has at least one previously reported pathogenic variant, 0 indicates that it has no previously reported pathogenic variants). (TXT 12240 kb)
Additional file 8:
A PDF containing the full results of the comparison to variant level predictors. (PDF 88 kb)
Additional file 9:
A text file with the standard deviation of the domain subRVIS scores, per gene, across genes with at least three regions. (TXT 671 kb)
Additional file 10:
A text file of domain: subRVIS scores, subGERP scores, genic RVIS scores, pathogenic counts, mutation rates, and coverage percentages. (TXT 7982 kb)
Additional file 11:
A text file of exonic: subRVIS scores, subGERP scores, genic RVIS scores, pathogenic counts, mutation rates, and coverage percentages. (TXT 15950 kb)
Additional file 12:
A text file with an estimate of the disease risk per protein domain. The format of this table is: PSSM ID; domain name; tally of reported pathogenic mutations in this domain type across genes; cumulative mutation rate in this domain type across genes; the tally divided by the cumulative mutation rate. (TXT 405 kb)
Gussow, A.B., Petrovski, S., Wang, Q. et al. The intolerance to functional genetic variation of protein domains predicts the localization of pathogenic mutations within genes. Genome Biol 17, 9 (2016). https://doi.org/10.1186/s13059-016-0869-4
RVIS
subRVIS
subGERP
Pathogenic
\begin{document}
\begin{abstract}
We consider a class of elliptic and parabolic problems,
featuring a specific nonlocal operator of fractional-laplacian
type, where integration is taken on variable domains.
Both elliptic and parabolic problems are proved to be uniquely solvable in
the viscosity sense.
Moreover, some spectral properties of the elliptic operator are investigated, proving existence and simplicity of the first eigenvalue. Eventually, parabolic solutions are proven to converge to
the corresponding limiting elliptic solution in the long-time limit. \end{abstract}
\maketitle
\section{Introduction}
The study of PDE problems driven by nonlocal operators is attracting ever-growing attention. This is in part motivated by the great relevance of such operators in applications, among which are L\'evy processes, differential games, and image processing, just to mention a few.
The paramount example of nonlocal operator is the {\it
fractional laplacian}, which can be defined in the following
principal-value sense \begin{equation}\label{def} (-\Delta)_ {s} u (x) =
p.v.\int_{\mathbb{R}^N}\frac{u(x)-u(y)}{|x-y|^{N+2s}}dy \quad \text{ for} \ s\in(0,1). \end{equation}
In this paper we focus on elliptic and parabolic problems featuring a localized version of the classical fractional-laplacian operator, namely, \begin{equation}\label{10-6bis0} u(x)\mapsto p.v. \int_{
\Omega(x)} \frac{u(x)-u(y)}{|x-y|^{N+2s}} dy. \end{equation} In contrast with the classical fractional laplacian, the main feature in \eqref{10-6bis0} is that integration is taken with respect to an $x$-dependent bounded set $\Omega(x)$. Such a modification is inspired by the analysis of the hydrodynamic limit of kinetic equations \cite{pedro} and of peridynamics \cite{Silling00}. We comment on these connections in Subsection \ref{sec:connections} below.
In order to specify our setting further, let us recall that the homogeneous Dirichlet problem associated with the classical fractional laplacian $(-\Delta)_ {s}$ from \eqref{def} in a bounded set $\Omega\subset \mathbb{R}^N$ requires one to prescribe the nonlocal boundary condition $u=0$ on $\mathbb{R}^N\setminus \Omega$. Under such a condition the fractional laplacian can be written as \begin{align}\label{decomposition}
&(-\Delta)_ {s} u (x) = p.v.\int_{\mathbb{R}^N}\frac{u(x)-u(y)\chi_{\Omega}}{|x-y|^{N+2s}}dy=p.v.\int_{\Omega}\frac{u(x)-u(y)}{|x-y|^{N+2s}}dy+u(x)\int_{\mathbb{R}^N\setminus\Omega}\frac{1}{|x-y|^{N+2s}}dy
\\
&\quad =:(-\Delta)^{\Omega}_ {s} u (x) +k(x) u(x)\nonumber,
\end{align} where $\chi_\Omega$ indicates the characteristic
function of $\Omega$. The nonlocal operator $(-\Delta)^{\Omega}_ {s}$ is usually called {\it regional fractional laplacian}. By indicating with $d(x)$ the distance of $x\in \Omega$ from the boundary $\partial \Omega$, under mild regularity assumptions on $\partial\Omega$ the function $k(x)$ can be proved to satisfy \begin{equation}\label{killing}
\frac{\alpha}{d(x)^{2s}}\le k(x)\le \frac{\beta}{d(x)^{2s}}, \end{equation}
for some $0<\alpha<\beta$. We provide a proof of this property
in
Lemma \ref{hbeha}.
Differently from the classical fractional laplacian \eqref{def}, the regional laplacian $(-\Delta)^{\Omega}_ {s}$ acts on functions defined in $\Omega$. Correspondingly, boundary conditions for the homogeneous Dirichlet problem for $(-\Delta)^{\Omega}_ {s}$ can be directly prescribed
on $\partial\Omega$.
The two operators $(-\Delta)_ {s}$ and $(-\Delta)^{\Omega}_ {s}$ differ especially in the vicinity of the boundary, as one can expect looking at \eqref{decomposition}. While for any $s\in(0,1)$ the solution of the homogeneous Dirichlet problem associated with \eqref{def} behaves as $d(x)^s$ as $x$ approaches $\partial \Omega$, the one associated to $(-\Delta)^{\Omega}_ {s}$ goes to zero as $d(x)^{2s-1}$ for $s\in(1/2,1)$. In fact, for $s\in(0,1/2]$, the Dirichlet problem associated to the regional laplacian is not well-defined, independently of the regularity of $\partial \Omega$. This can be explained via trace theory (the trace operator exists only if $s>1/2$, see \cite{tar}, \cite{fall}), or in terms of the probabilistic process associated to $(-\Delta)^{\Omega}_ {s}$ (such a process reaches the boundary only if $s>1/2$, see for instance \cite{bog}). Roughly speaking, we can say that the term $k(x) u(x)$ regularizes the operator $(-\Delta)^{\Omega}_ {s}$ close to the boundary, by forcing a quantified convergence of the solution to zero when approaching $\partial \Omega$. The reader can find more on the relation between $(-\Delta)_ {s}$ and $(-\Delta)^{\Omega}_{s}$ in the recent survey \cite{alot} and in the references therein.
Inspired by definition \eqref{10-6bis0}, by decomposition \eqref{decomposition}, and by property \eqref{killing}, we aim at considering more general nonlocal operators of the following form \begin{equation}\label{10-6} h(x) u (x)+\mathcal{L}_{s} (\Omega(x),u (x)), \end{equation}
where $\mathcal{L}_{s} (\Omega(x) ,u (x_0))$ is defined as \begin{equation}\label{10-6bis} \mathcal{L}_{s} (\Omega(x),u (x_0))=p.v. \int_{y\in
\Omega(x)} \frac{u(x_0)-u(y)}{|x-y|^{N+2s}} dy. \color{black} \end{equation} In the following, we often use the change of coordinates $z=x-y$. In this new reference frame we will write \begin{equation} \label{omegatxb} \tilde{\Omega}(x)=\{z\in\mathbb{R}^N \ : \ x-z\in \Omega(x)\}. \end{equation} In \eqref{10-6}, the function $h(x)\in C(\Omega)$ is assumed to be given and to fulfill \begin{equation}\label{alpha} 0< \frac{\alpha}{d(x)^{2s}}\le h(x)\le \frac{\beta}{d(x)^{2s}}, \ \ \ \mbox{with} \ \ \ 0<\alpha\le\beta, \end{equation} and the set-valued function $x\to\Omega(x)\subset\Omega$ is assumed to satisfy
\begin{align}\label{continuita}
& \quad \forall x \in \Omega:\quad \lim_{y\to x}|\Omega(y)\triangle\Omega(x)|=0,\\
& \quad \exists \ \zeta\in \big(0,1/2\big) , \ \forall x \in \Omega :\quad \tilde{\Omega}(x)\cap B_{r}(0)=\Sigma\cap B_{r}(0) \mbox{ for all } r\le \zeta d(x),\label{ostationary} \end{align}
for some given open set $\Sigma$ such that \begin{equation}\label{Sigmadef}
\Sigma=-\Sigma \quad \mbox{and} \quad \left|\Sigma\cap
\big(B_{2r}(0)\setminus B_r(0)\big)\right|\ge q|B_{2r}(0)\setminus B_r(0)| \mbox{ for any } r>0, \end{equation} with $q\in(0,1)$. Note that the symmetry of $\Sigma$ is required for the well-definiteness of the
operator on smooth functions and that assumption \eqref{continuita} ensures that the operator $\mathcal{L}_{s}$ from \eqref{10-6bis} is
continuous with respect to its arguments. For a more extensive discussion of the hypotheses, see Section \ref{existingliterature}.
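For instance, condition \eqref{Sigmadef} is satisfied, with equality, by any symmetric open cone: if $A\subset\partial B_1(0)$ is relatively open, with $A=-A$ and $\mathcal{H}^{N-1}(A)=q\,\mathcal{H}^{N-1}(\partial B_1(0))$ for some $q\in(0,1)$, then the cone $\Sigma=\{z\in\mathbb{R}^N\setminus\{0\} \ : \ z/|z|\in A\}$ fulfills $\Sigma=-\Sigma$ and, integrating in polar coordinates, \[ \left|\Sigma\cap \big(B_{2r}(0)\setminus B_r(0)\big)\right|=\mathcal{H}^{N-1}(A)\int_r^{2r}\rho^{N-1}\,d\rho= q\,|B_{2r}(0)\setminus B_r(0)| \qquad \mbox{for any } r>0. \]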
Our first goal is to show that the {\it elliptic problem} related to the nonlocal operator \eqref{10-6} given by \begin{equation}\label{10:17} \begin{cases} h(x)u (x)+\mathcal{L}_{s} (\Omega(x),u (x))= f(x) \qquad & \mbox{in } \Omega,\\
u(x) = 0 & \mbox{on } \partial\Omega ,\\ \end{cases} \end{equation} is uniquely solvable, for any $f\in C(\Omega)$ such that \begin{equation}\label{fcon}
\exists \ \eta_f\in (0,2s), C>0 \ : \ |f(x)|d(x)^{2s- \eta_f}\le C. \end{equation}
In the current setting it is natural to address problem \eqref{10:17} in the viscosity sense.
Indeed, assumptions \eqref{continuita}-\eqref{Sigmadef} imply that the operator \eqref{10-6} satisfies the comparison principle. Moreover the growth conditions \eqref{alpha} and~\eqref{fcon} allow us to build a barrier for problem \eqref{10:17} of the form \begin{equation}\label{barrierintro} C_\alpha d(x)^{ \eta} \ \ \ \mbox{for some small } \eta>0. \end{equation}
Having comparison and barriers at hand, we can implement the Perron method and prove in Theorem \ref{teide} the existence of a
unique viscosity solution for \eqref{10:17}. Note that $C_\alpha$ degenerates with $\alpha$. In particular,
if the term $h(x)$ degenerates
at the boundary, solvability of problem \eqref{10:17} may fail, at least for $s\in(0,1/2]$,
see the above discussion.
A second aim of our paper is to address the spectral properties of the operator \eqref{10-6}. We focus on the first eigenvalue associated to \eqref{10-6} with homogeneous Dirichlet boundary conditions and we show that there exists a unique $\overline{\lambda}>0$ such that the problem
\begin{equation}\label{10:17ter} \begin{cases} h(x)v (x)+\mathcal{L}_{s} (\Omega(x),v (x))= \overline{\lambda} v (x)\qquad & \mbox{in } \Omega,\\
v(x) = 0 & \mbox{on } \partial\Omega,\\ \end{cases} \end{equation} admits a strictly positive viscosity solution. Moreover such a solution is unique, up to multiplication by constants.
As our setting is nonvariational, the characterization of such a first
eigenvalue follows the approach of the seminal work \cite{bere}. More precisely, we define $\overline{\lambda}$ as the supremum of the values $\lambda\in \mathbb{R}$ such that the problem \begin{equation}\label{refineed} \begin{cases} h(x)u(x)+\mathcal{L}_{s} (\Omega(x),u (x))= \lambda u (x) + f (x)\color{black}\qquad & \mbox{in } \Omega,\\
u(x) = 0 & \mbox{on } \partial\Omega\, ,\\
u(x) > 0 & \mbox{in } \Omega\, ,\\ \end{cases} \end{equation} admits a solution, for some given nonnegative and
nontrivial $f\in C(\Omega)$ satisfying
\eqref{fcon}, see Theorems \ref{mussaka}-\ref{helmholtz}.
As we shall see, this characterization does not depend
on the particular choice of $f$. Such a definition is slightly
different from the usual one (see for instance \cite{biri},
\cite{busca}, \cite{quaas} and \cite{biswas} for the same
approach in both local and nonlocal settings), being
based on the concept of solution instead of that of
supersolution.
A further focus of this paper is the study of the evolutionary counterpart of \eqref{10:17}, namely, the {\it
parabolic problem} \begin{equation}\label{10:17bis} \begin{cases} \partial_t u (t,x)+h(x)u (t,x)+\mathcal{L}_{s} (\Omega (t,x),u (t,x))= f(t, x) \qquad & \mbox{in } (0,T)\times\Omega,\\
u(t,x) = 0 & \mbox{on } (0,T)\times\partial\Omega ,\\
u(0,x) = u_0(x) & \mbox{on } \Omega. \end{cases} \end{equation} We prove existence and uniqueness of a global-in-time viscosity
solution to problem \eqref{10:17bis}. See Theorem \ref{existence} below for the precise statement in a more general setting, where a time-dependent version of assumption \eqref{ostationary} is considered. The behavior of the solution \color{black}
$u(x,t)$ for large times is then addressed in Theorem \ref{lalaguna}: Taking advantage of the characterization of $\overline{\lambda}$ we prove that for any $\lambda <\overline{\lambda}$ there exists $C_{\lambda}$ such that \[
|u(t,x)-u(x)|\le C_{\lambda} \overline{\varphi} (x) e^{-\lambda t}, \] where $\overline{\varphi}$ is the normalized positive eigenfunction associated to $\overline{\lambda}$, $u(x)$ is the solution of the elliptic problem \eqref{10:17} and $u(t,x)$ solves the parabolic problem \eqref{10:17bis} for the same time-independent forcing $f(x)$. \color{black}
\subsection{Relation with applications}\label{sec:connections}
As mentioned, the specific form of operator $\mathcal{L}_{s}$, in particular the
dependence of the integration domain $\Omega(x)$ on $x$, occurs in
connection with different applications.
A first occurrence of operators of the type of $\mathcal{L}_{s}$ is
the study of the hydrodynamic limit of collisional kinetic equations with a heavy-tailed thermodynamic equilibrium. When posed in the whole space, the reference nonlocal operator in the limit is the classical fractional laplacian \eqref{def}, see \cite{melletbis}. The restriction of the dynamics to a bounded domain with a zero inflow condition at the boundary asks for considering \eqref{10-6bis0} instead \cite{pedro}. In this connection, $\Omega(x)$ is defined to be the largest star-shaped set centered at $x\in\Omega$ and contained in $\Omega$. The heuristics for this choice is that a particle centered at $x$ is allowed to move along straight paths and is removed from the system as soon as it reaches the boundary. Hence, the possible interaction range of a particle sitting at $x$ is exactly $\Omega(x)$.
In particular, the resulting hydrodynamic limit under homogeneous Dirichlet conditions features the nonlocal functional \begin{equation}\label{start} a(x)u (x)+ p.v.\int_{\Omega(x)}
\frac{u(x)-u(y)}{|x-y|^{N+2s}} dy =: a(x)u (x) + (-\Delta)^\star_s u(x), \ \ \ s\in(0,1). \end{equation}
Here, the function
$a(x)$ has the following specific form \begin{equation}\label{15:15}
a(x)=\int_{\mathbb{R}^N}\frac{1}{|y|^{N+2s}}e^{-\frac{d(x,\sigma(y))}{|y|}}dy, \end{equation} with $d(x,\sigma(y))$ being the length of the segment joining $x\in \Omega$ with the closest intersection point between $\partial \Omega$ and the ray starting from $x$ with direction $\sigma(y)
=\frac{y}{|y|}$. Clearly, if $\Omega$ is convex one has that $\Omega(x)\equiv \Omega$ for all $x$ and the function $a(x)$ coincides, up to a constant, with the function $k(x)$ of \eqref{decomposition} (see Lemma \ref{acca}). In this case we recover exactly (up to a constant) the operator in \eqref{decomposition}. Note however that even in the case of a nonconvex domain $\Omega$, the function $a(x)$ satisfies the condition \eqref{alpha} (see Lemma \ref{hbeha}).
A second context where nonlocal operators of the type of \eqref{10-6bis} arise is that of {\it peridynamics} \cite{Silling00}. This is a nonlocal mechanical theory, based on the formulation of equilibrium systems in integral instead of differential terms. Forces acting on the material point $x$ are obtained as a combined effect of interactions with other points in a given neighborhood. This results in an integral featuring a radial weight which modulates the influence of nearby points in terms of their distance \cite{Emmrich,survey}. A reference nonlocal operator in this connection is \begin{equation}\label{peridyn} u(x) \mapsto
p.v.\int_{ B_{\rho}(x)} \frac{u(x)-u(y)}{|x-y|^{N+2s}} dy. \end{equation} Here, $ B_{\rho}(x)$ is the ball of radius $\rho>0$ centered at $x$. In particular, the parameter $\rho$ measures the interaction range. Such operators have been used to approximate the fractional laplacian in numerical simulations (see \cite{duo} and the references therein); note also the parametric analysis in \cite{burkovska}. The operator $\mathcal{L}_s$ in \eqref{10-6bis} hence corresponds to a natural generalization of the latter to the case of an interaction range which varies along the body, as could be the case in the presence of a combination of different material systems. This would correspond to choosing a varying $\rho(x)$.
\subsection{Existing literature}\label{existingliterature}
To our knowledge, operator \eqref{10-6} has not been studied yet.
Despite its simple structure when compared to the general operators
usually allowed in the fully nonlinear setting, most of the available tools seem not to directly apply.
In this section, we aim at presenting a brief account of the literature in order to put our contribution in context.
Following the seminal work \cite{CIL}, the existence theory of viscosity solutions, through comparison principle, barriers, and Perron method, has been generalized to a large class of elliptic and parabolic integral differential equations, see for instance \cite{barimb}, \cite{bci}, \cite{caff}, \cite{cd}, \cite{cdbis} and \cite{jk}.
Comparison principles for nonlocal problems in the viscosity setting can be found in \cite{barimb}, see also \cite{jk}. One of the key structural assumptions of these works reads, in our notation, \begin{equation}\label{bbarles}
\int_{\mathbb{R}^N}|\chi_{\tilde{\Omega}(x)}-\chi_{\tilde{\Omega}(y)}||z|^{2-N-2s}dz\le c|x-y|^2 \end{equation} for some positive constant $c>0$ (see assumption (35) in \cite{barimb}). This type of condition allows the authors to implement the variable-doubling strategy of \cite{CIL} for a large class of operators of so-called L\'evy-Ito type. This is not expected here,
for our set of assumptions does not imply \eqref{bbarles}
as the following simple argument shows: let $\Omega$ be the unitary ball centered at the origin and $\tilde{\Omega}(x)=B_{\rho(x)}$ for any $x\in\Omega$ with $\rho(x)=d(x)^{ 1/(2-2s)}$. Then, for any $x,y\in\Omega$ we get \[
\int_{\mathbb{R}^N}|\chi_{\tilde{\Omega}(x)}-\chi_{\tilde{\Omega}(y)}||z|^{2-N-2s}dz=\frac{\omega_N}{2-2s}|\rho(x)^{2-2s}-\rho(y)^{2-2s}|=\frac{\omega_N}{2-2s}|x-y|. \] Our alternative strategy is to assume that the operator is somehow translation invariant \emph{close} to the singularity, with this property degenerating while \color{black} approaching $\partial\Omega$, see assumption \eqref{ostationary} above. This allows for some cancellations that bypass the problem of the singularity of the kernel. Instead of doubling variables, we use the inf/sup-convolution technique to prove that the sum of viscosity subsolutions is still a viscosity subsolution. Eventually, we also quote the interesting comparison result in \cite{gms}. There, the doubling of variables is combined with an optimal-transport argument. Such an approach, however, requires the uniform continuity of solutions,
and it is not clear how to adapt it for general viscosity solutions.
As far as the construction of barriers is concerned, notice that the typical difficulty is to estimate from below a term of the type \[
\int_{\mathbb{R}^N\setminus\Omega}\frac{1}{|x-y|^{N+2s}}dy, \]
which requires some regularity on the boundary of $\Omega$. Of course we refer here to the standard case of the fractional laplacian,
but the same idea can be extended to more general operators, see
\cite[Lemma 1]{bci}. In our case, since we impose a priori condition \eqref{alpha}, we can actually deal with any open domain, paying the price of a poor control on the decay of the solution close to the boundary, see \eqref{barrierintro}.
For an alternative approach to the Perron method, which does not rely on the comparison principle and therefore produces discontinuous viscosity solutions, we refer the interested reader to \cite{mou}.
For a comprehensive overview on the numerous contributions to the regularity theory for viscosity solutions to nonlocal elliptic and parabolic equations, we address the reader to the rather detailed
introductions of \cite{schwabsil} and \cite{krs}. Here, we provide a small overview of some results more closely related to our work. The first fully PDE-oriented result about H\"older regularity for elliptic nonlocal operators has been obtained in \cite{silve}. A drawback of this approach is that it does not allow one to consider the limit $s\to1$. The first H\"older estimate which is robust enough to pass to the limit as $s\to1$ has been obtained in \cite{caff} (see also \cite{caffbis}) and then generalized to parabolic equations in \cite{cd} and \cite{cdbis}. All these results apply to operators whose kernel is pointwise controlled from above and from below by the one of $(-\Delta)_s$. For results where such pointwise control is not available, we refer again to \cite{schwabsil} and \cite{krs}. More in detail, our condition \eqref{Sigmadef} is a simplified version of assumption (A3) in \cite{schwabsil}. Condition \eqref{Sigmadef} allows us to deduce an interior regularity estimate, in the spirit of the more general \cite[Theorem 4.6]{mou}.
\\
As mentioned, we define the first eigenvalue of \eqref{10-6} following the approach in \cite{bere}.
The advantage of this approach is that it is independent of
the variational structure of the
operator by directly relying on
the maximum principle, as well as on the existence of
positive (super-)solutions. For this reason it
has been fruitfully used in the framework of viscosity solution for
second order fully nonlinear differential equations, see for
instance \cite{biri}, \cite{busca}, and \cite{QS}.
An early result related to eigenvalues of nonlocal operators with singular kernel is in \cite{bego} where existence issues in presence of lower order terms are tackled.
A closer reference is \cite{DQT}, where the principal eigenvalues of some fractional nonlinear equations, with inf/sup structure are studied. In this paper the authors prove, among other results, existence and simplicity of principal eigenvalues together with some isolation property and the antimaximum principle. Other results following the same line of investigation can be found in \cite{quaas} and \cite{biswas}. We point out that the operators considered in these works are just positively homogeneous (i.e. $\mathcal{L}(u)\neq-\mathcal{L}(-u)$), which gives rise to the existence of two principal {\it half-eigenvalues}, namely corresponding to differently signed eigenfunctions. We are not concerned with this phenomenon here.
A common tool used in the previous works to prove existence of eigenvalues is a nonlinear version of the Krein-Rutman theorem for compact operators, see again \cite{quaas} and \cite{biswas} and the references therein. Let us also mention the recent
\cite{birga} and \cite{bismod}, that deal with a different kind of fractional operators.
To prove existence of the first eigenvalue, we follow a direct approach based on the approximation of problem \eqref{10:17ter}. Unlike the previously quoted papers, we do not resort to a global regularity result to deduce either the compactness of the operator or the uniform convergence of approximating solutions. Instead, we combine the so-called {\it half-relaxed-limit} method, a version of the {\it refined maximum principle}, and the interior regularity results of \cite{schwabsil,mou}. The half-relaxed limit method is a powerful tool to pass to the limit with no other regularity than uniform boundedness, see for instance \cite{barles} and the references therein. In general, the price for such generality is to handle discontinuous viscosity solutions. We however avoid this, for we are able to prove a refined maximum principle for \eqref{refineed} in the spirit of \cite{bere}. This, in turn, provides a comparison between sub- and supersolutions, eventually ensuring \color{black} continuity.
Due to the particular structure of our nonlocal operator, a key ingredient in our proof is a restriction procedure for \eqref{10-6bis} in subdomains of $\Omega$, which is where the density assumption \eqref{Sigmadef} is needed. Heuristically speaking, such an assumption forces the operator to \emph{look the same
at every scale}, as for the kernel singularity and the
behavior \eqref{alpha} of the term $h$ (see Lemma \ref{localization}).
Eventually, our refined maximum principle allows us to show
uniqueness of the first eigenvalue and its simplicity.
A general reference for the long-time behavior of solutions to
nonlocal parabolic equation is the monograph \cite{bookjulio}. We also refer to \cite{berebis} and to the already mentioned \cite{quaas}.
In the latter work, a fractional operator with drift term is considered and the viscosity solution of the associated homogeneous
initial-boundary value problem is proved to converge to zero in the
large-time limit.
Here, we consider a non-homogeneous parabolic equation with initial datum and homogeneous Dirichlet boundary conditions.
Moreover, we allow the coefficients of our operator to be time-dependent. Under suitable assumptions on this time dependence, we prove that the parabolic solution converges to the stationary one exponentially in time. The exponential rate of convergence depends on the principal eigenvalue.
\section{Statement of the main results}
In this section, we collect our notation and state our main results.
Given any $D\subset \mathbb{R}^{M}$, we indicate \color{black} upper and lower semicontinuous functions on $D$ as \[ \text{\rm USC}(D)=\left\{u:D\to \mathbb{R} \ : \ \limsup_{z \to z_0}u(z)\le u(z_0)\right\}, \] \[ \text{\rm LSC}(D)=\left\{u:D\to \mathbb{R} \ : \ \liminf_{z\to z_0}u(z)\ge u(z_0)\right\}. \] We also write $\text{\rm USC}_b(D)$ and $\text{\rm LSC}_b(D)$ for the set of upper and lower semicontinuous functions that are bounded. Given a function $u:D\to \mathbb R$ we indicate \color{black} its {\it upper and lower semicontinuous envelopes} as \[ u^*(z_0)=\limsup_{z\to z_0}u(z) \quad \text{and} \quad u_*(z_0)=\liminf_{z\to z_0}u(z), \] respectively. \color{black}
\begin{defin}[Viscosity solutions]\label{def1}
Elliptic case:
We say that $u\in \text{\rm USC}_b( \Omega )$ ($\in \text{\rm LSC}_b( \Omega )$) is a \emph{viscosity sub (super) solution} to the equation \[ h(x)u (x)+\mathcal{L}_{s} (\Omega(x),u (x))= f(x) \]\color{black} if, whenever $x \in \Omega $ and $\varphi\in C^2( \Omega)$ are such that $u( x )=\varphi(x )$ and $u(y)\le \varphi(y)$ for all $y\in\Omega$, then \begin{equation*}
h( x ) \varphi(x )+\mathcal{L}_{s} (\Omega(x),\varphi(x ))\le (\ge) \ f(x ). \end{equation*}
Moreover $u\in C(\overline \Omega)$ is a \emph{viscosity solution} to problem \eqref{10:17} if it is both sub- and supersolution and satisfies the boundary condition $u=0$ on $\partial \Omega$ pointwise.\color{black}
Parabolic case: \color{black} We say that $u\in \text{\rm USC}_b( (0,T)\times\Omega )$ ($\in \text{\rm LSC}_b( (0,T)\times\Omega )$) is a \emph{viscosity sub (super) solution} to the equation \[ \partial_tu(t,x) +h(x) u(t,x)+\mathcal{L}_{s} (\Omega(t,x),u(t,x) )= f(t,x) \qquad \mbox{in } (0,T)\times\Omega \] \color{black} if, whenever $( t , x )\in (0,T)\times\Omega $ and $\varphi\in C^2((0,T)\times{\Omega})$ are such that $u( t , x )=\varphi( t , x )$ and $u(\tau,y)\le \varphi(\tau,y)$ for all $(\tau,y)\in(0,T)\times\Omega$, then \begin{equation*}
\partial_t \varphi( t , x )+ h( x ) \varphi( t , x )+\mathcal{L}_{s} (\Omega(t,x),\varphi( t , x ))\le (\ge) \ f( t , x ). \end{equation*}
Moreover $u\in C([0,T)\times\overline \Omega)$ is a \emph{viscosity solution} to \eqref{parabolic} if it is both sub- and supersolution and satisfies the boundary and initial conditions
\begin{align}
\label{parabolic}
\left\{ \begin{array}{ll}
u(t,x) = 0 & \mbox{on } (0,T)\times\partial\Omega\, ,\\
u(0,x) = u_0(x) & \mbox{on } \Omega,
\end{array}
\right. \end{align} pointwise.\color{black} \end{defin}
We are now in the position of stating our main results.
\begin{teo}[Well-posedness of the elliptic problem]\label{teide} Let us assume \eqref{alpha}-\eqref{Sigmadef}, and
\eqref{fcon}. Then, problem \eqref{10:17} admits a unique viscosity solution $u$. This satisfies $|u(x)|\le C d(x)^{\eta}$ for some suitably small $\eta>0$ and large $C>0$. \end{teo}
\begin{teo}[Well-posedness of the elliptic first-eigenvalue
problem]\label{mussaka} Under the same assumption of Theorem \ref{teide}, there exists a unique $\overline{\lambda}>0$ \color{black} such that \begin{equation}\label{eigenproblem} \begin{cases} {h}(x)v (x)+{\mathcal{L}}_s (\Omega(x), v(x))=\overline{\lambda} v (x)\qquad & \mbox{in } \Omega,\\
v(x) = 0 & \mbox{on } \partial \Omega, \end{cases} \end{equation} admits a nontrivial (strictly) positive viscosity solution. Such solution is unique, up to a multiplicative constant. Moreover, the first eigenvalue can be characterized as $\overline{\lambda}=\sup E$, where \begin{equation}\label{eq:E} E=\{\lambda\in\mathbb R \ : \ \exists v\in C(\overline{\Omega}), \ v>0 \mbox{ in } \Omega \ v=0 \mbox{ on } \partial\Omega \ \ \mbox{such that} \ \ hv+\mathcal{L}_s(v)= \lambda v+f\}. \end{equation} and $f\in C(\Omega)$ is any given positive and nonzero function satisfying \eqref{fcon}. Note in particular, that the set $E$ is independent of $f$. \end{teo}
\begin{teo}[Well-posedness of the elliptic problem below the first eigenvalue]\label{helmholtz}
Under the same assumption of Theorem \ref{teide}, for any
$\lambda<\overline{\lambda}$, the problem \begin{equation}\label{helmeq} \begin{cases} {h}(x)u (x)\color{black}+{\mathcal{L}}_s (\Omega(x),u(x)\color{black})=\lambda u (x)\color{black}+ f(x) \qquad & \mbox{in } \Omega,\\
u(x) = 0 & \mbox{on } \partial \Omega, \end{cases} \end{equation} admits a unique viscosity solution. In particular, we have $(-\infty,\overline\lambda)=E$ where the set $E$ is defined in \eqref{eq:E}.
\end{teo}
Let us now turn to the parabolic problem.
Before stating our results, we present a time-dependent generalization of the hypotheses of Theorem \ref{teide}. We assume
the set-valued function $(t,x)\mapsto\Omega(t,x)\subset\Omega$ to fulfill \color{black} the following assumptions \begin{align}\label{contpar}
& \quad \forall (t,x) \in (0,\infty)\times\Omega: \ \ \ \lim_{(\tau,y)\to (t,x)}|\Omega(\tau,y)\triangle\Omega(t,x)|=0,\\
& \quad \forall \ T >0, \ \exists \ \zeta\in \big(0,1/2\big) \ :\ \forall (t,x) \in (0,T)\times\Omega \ \ \ \tilde{\Omega}(t,x)\cap B_{r}(0)=\Sigma\cap B_{r}(0),\label{omegax} \end{align}
for all $r\le \zeta d(x)$ and $\Sigma$ as in \eqref{Sigmadef}. Recall that $\tilde{\Omega}(t,x)=\{z\in\mathbb{R}^N \ : \ x-z\in \Omega(t,x)\}$. Moreover, we let $h\in C((0,\infty)\times\Omega)$ satisfy for $0<\alpha\le\beta$ \begin{equation}\label{alphap} \frac{\alpha}{d(x)^{2s}}\le h(t,x)\le \frac{\beta}{d(x)^{2s}}; \end{equation} and we assume that $f\in C((0,\infty)\times\Omega)$, $u_0\in C(\Omega)$, and that there exists $ \eta_1\in(0,2s)$ such that \begin{equation}\label{fconintro}
|f(x,t)|d(x)^{2s-\eta_1}\le C, \end{equation} \begin{equation}\label{u0con}
|u_0(x)|d(x)^{-\eta_1}\le C. \end{equation}
\begin{teo}[Well-posedness of the parabolic
problem] \label{existence}
Let us fix $T\in(0,\infty]$. Under assumptions \eqref{Sigmadef},
\eqref{contpar}-\eqref{u0con} there exists a unique viscosity solution
\color{black} $u\in C([0,T)\times\overline\Omega)\cap
L^{\infty}([0,T)\times \Omega)$ of \color{black}
\begin{align}
\label{parabolic}
\left\{ \begin{array}{ll} \partial_tu(t,x) +h(t , x) u(t,x)+\mathcal{L}_{s} (\Omega(t,x),u(t,x) )= f(t,x) \qquad &\mbox{in } (0,T)\times\Omega,\\
u(t,x) = 0 & \mbox{on } (0,T)\times\partial\Omega\, ,\\
u(0,x) = u_0(x) & \mbox{on } \Omega.
\end{array}
\right. \end{align}
\end{teo}
Finally, we address the asymptotic behavior of the solution provided by Theorem \ref{existence} as $t \to \infty$. In order to do that, we require that all the time-dependent data in \eqref{parabolic} suitably converge to their stationary counterparts in \eqref{10:17}. In particular, we assume that, for some $\eta_2\in(0,2s)$ and $\lambda<\overline{\lambda}$,
&\left(|f(t,x)-f(x)|+|h(t,x)-h(x)|\right)d(x)^{2s-\eta_2}e^{\lambda
t}\le C_1,\\
\label{decayomega}
&|\Omega(t,x)\Delta\Omega(x)|d(x)^{-N-\eta_2}e^{\lambda t}\le C_2. \end{align} Assumption \eqref{omegax} must also be strengthened \color{black} as follows \begin{equation}\label{threeprime}
\exists \ \zeta\in \big(0,1/2\big),\ \forall (t,x) \in I\times\Omega: \ \ \ \tilde{\Omega}(t,x)\cap B_{r}(0)=\Sigma\cap B_{r}(0) \color{black} . \end{equation} \begin{teo}[Long-time behavior] \label{lalaguna} Let us assume \eqref{alpha}-\eqref{ostationary}, \eqref{contpar}-\eqref{u0con}, and \eqref{decaydata}-\eqref{threeprime}. Let $u$ \color{black} be the unique viscosity solution of the parabolic problem \eqref{parabolic} on $(0,\infty)$, $v$ be the unique solution of the elliptic problem \eqref{10:17}, and let $\lambda$ fulfill \eqref{decaydata}.
Then, there exists $C=C(\lambda)>0 $ such that $
|u(x,t)- v(x)|\le C d(x)^{\eta} e^{-\lambda t}$ for some small $\eta>0$. \end{teo}
\section{Preliminary material}
We collect in this section some preliminary lemmas, which will be used in the proofs later on. \subsection{Background on viscosity theory} In the following, we will often make use of the continuity of the integral operator with respect to a suitable convergence of its variables. We state this property in its full generality, in order to be able to apply it in different contexts throughout the paper. \begin{lemma}[Continuity of the integral operator]\label{continuity} Let us consider a sequence of points $x_n\to\bar x\in\Omega$ and a family of bounded sets $\Theta(x_n), \Theta(\bar x)$ such that $\chi_{\Theta(x_n)}\to \chi_{\Theta(\bar x)}$ almost everywhere and, for some $\delta>0$, \[
\ \tilde{\Theta}(x_n)\cap B_{r}(0)=\Sigma\cap B_{r}(0) \quad \mbox{ for all } r\le \delta\mbox{ and } n>0, \]with $\Sigma$ as in \eqref{Sigmadef}. Assume moreover that $\phi_n\to\phi$ pointwise with \begin{equation}\label{c11} \begin{split}
\left|\phi_n(x_n)-\phi_n(x_n+z)-q_n \cdot z\right|&\le C|z|^2\\
\left|\phi(\bar x)-\phi(\bar x+z)-q \cdot z\right|&\le C|z|^2
\end{split} \ \ \ \ \ \ \mbox{ for } \ \ \ |z|\le \delta, \end{equation} for some $q_n, q\subset \mathbb{R}^N$ and $C$ a positive constant that does not depend on $n$. Then \[ \mathcal{L}_{s} (\Theta(x_n),\phi_n(x_n))\to \mathcal{L}_{s} ( \Theta(\bar x),\phi(\bar x)). \] \end{lemma} \begin{proof} For any $r<\delta$, we can decompose the integral operator as follows \begin{equation} \label{eq1} \mathcal{L}_{s} (\Theta(x_n),\phi_n(x_n))=\mathcal{L}_{s} (\Theta(x_n)\setminus B_{ r}(x_n),\phi_n(x_n))+\mathcal{L}_{s} (\Theta(x_n)\cap B_{ r}(x_n),\phi_n(x_n)). \end{equation} Notice that the first integral in the right-hand side of \eqref{eq1} is nonsingular and, for any fixed $r$, it readily passes to the limit as $n\to\infty$, thanks to the convergence of $\Theta(x_n)$ and $\phi_n$. Instead, the second one has to be meant in the principal value sense \begin{align*} \mathcal{L}_{s} (\Theta(x_n)\cap B_{ r}(x_n)&,\phi_n(x_n))\\
=\lim_{\rho\to0}\int_{\rho\le|z|\le r}&\frac{\phi_n(x_n)-\phi_n(x_n+z)}{|z|^{N+2s}}\chi_{\Sigma}dz=\lim_{\rho\to0}\int_{\rho\le|z|\le r}\frac{\phi_n(x_n)-\phi_n(x_n+z)-q_n\cdot z}{|z|^{N+2s}}\chi_{\Sigma}dz, \end{align*} where we have used that $\Sigma=-\Sigma$. Thanks to assumption \eqref{c11} we get \[
|\mathcal{L}_{s} (\Theta(x_n)\cap B_{ r}(x_n),\phi_n(x_n))|\le C\frac{\omega_N}{2-2s} r^{(2-2s)}. \] Since a similar computation is valid for $\mathcal{L}_{s} ( \Theta(\bar x),\phi(\bar x))$, we get that \[
|\mathcal{L}_{s} (\Theta(x_n),\phi_n(x_n))- \mathcal{L}_{s} ( \Theta(\bar x),\phi(\bar x))|\le |\mathcal{L}_{s} (\Theta(x_n)\setminus B_{ r},\phi_n(x_n))- \mathcal{L}_{s} ( \Theta(\bar x)\setminus B_{ r},\phi(\bar x))|+2C\frac{\omega_N}{2-2s} r^{(2-2s)}. \] The assertion follows by taking the limit in the inequality above,
first as $n\to\infty$ and then as $ r\to 0$. \end{proof}
Now we provide a suitably localized, equivalent definition of viscosity solutions, which will turn out useful in proving the comparison Lemma \ref{timecomp} below. Such \color{black} equivalence is already known (see, for instance, \cite{barimb}). For completeness, we give here a statement and a proof \color{black} in the elliptic \color{black} case.
\begin{lemma}[Equivalent definition]\label{def2} We have that \color{black} $u\in \text{\rm USC}_b( \Omega )$ ($\in \text{\rm LSC}_b( \Omega )$) is a viscosity subsolution (supersolution, respectively) to the equation in \eqref{10:17} if and only if, \color{black} whenever $x_0 \color{black}\in \Omega $ and $\varphi\in C^2(\Omega )$ are such that $u( x_0 \color{black} )=\varphi( x_0 \color{black} )$ and $u( y)\le \varphi( y)$ for all $y \in \Omega$, then for all $B_r( x_0 \color{black} )\subset \Omega$ the function \begin{equation}\label{16:57}
\varphi_r(x)= \begin{cases} \varphi(x) \qquad & \mbox{in } B_r( x_0 ),\\
u(x) & \mbox{otherwise, } \end{cases} \end{equation} satisfies \begin{equation}\label{serve}
h( x_0 \color{black} ) \varphi_r( x_0 \color{black} )+\mathcal{L}_{s} (\Omega( x_0 \color{black}),\varphi_r( x_0 \color{black}
))\le (\ge) \ f( x_0 \color{black} ). \color{black} \end{equation}
A similar result holds in the parabolic case.
\end{lemma} \begin{proof} We prove only the equivalence in the case of subsolutions, the case of supersolution being identical. \color{black}
Let us assume at first that $u\in \text{\rm USC}_b(\Omega)$ fulfills the conditions of Lemma \ref{def2}. We want to check that it is a viscosity subsolution in the sense of Definition \ref{def1}. Let $\varphi \color{black} \in C^2(\Omega)$ be such that $u-\varphi $ has a global maximum at $x_0$ and $\varphi(x_0)=u(x_0)$. It then follows that, for any $B_{r}(x_0)\subset\Omega$, \begin{align*} f(x_0)&\ge h(x_0)\varphi_r( x_0 )+\mathcal{L}_{s} (\Omega(x_0),\varphi_r( x_0 ))\\[1mm]
&=h(x_0)u(x_0)+\int_{ \Omega(x_0)\setminus B_{r}(x_0) } \frac{u(x_0)-u(y)}{|x_0-y|^{N+2s}} dy+\int_{ \Omega(x_0)\cap B_{r}(x_0) } \frac{u(x_0)-\varphi(y)}{|x_0-y|^{N+2s}} dy\\ &\ge h(x_0)u(x_0)+\int_{ \Omega(x_0)}
\frac{u(x_0)-\varphi(y)}{|x_0-y|^{N+2s}} dy\\[1mm]
&=h(x_0)\varphi ( x_0 )+\mathcal{L}_{s} (\Omega(x_0),\varphi (
x_0 )) , \end{align*} where the first inequality $\ge$ comes from \eqref{serve} and the second one follows as $u\le \varphi$ in $\Omega$.
To show the reverse implication, let us assume that $u\in \text{\rm USC}_b(\Omega)$ \color{black} is a viscosity subsolution to \eqref{10:17} according to Definition \ref{def1}. Let $\varphi\in C^2({\Omega})$ be such that $u-\varphi$ has a maximum at $x_0\in\Omega$ and $\varphi(x_0)=u(x_0)$ and, for any $B_{r}(x_0)\subset\Omega$, let $\varphi_{r}$ be the auxiliary function defined in \eqref{16:57}. As a first step, \color{black} we modify $\varphi_{r}$ \color{black} as $\varphi_{r,n}(x)=\varphi_{r}(x)
\color{black}+\frac1n|x-x_0|^2$ and notice that, for any $n\in\mathbb{N}$, the function \color{black} $u-\varphi_{r,n}$ has a strict local maximum at $x_0$ and \[ u-\varphi_{r,n}\le -\frac {r^2} {4n} \ \ \ \mbox{in} \ \ \ \Omega\setminus B_{\frac{ r}{2}}(x_0). \]
Let $\psi_1,\psi_2\in C^{\infty}(\Omega)$ be a partition of unity \color{black}
associated to the concentric balls $B_{\frac{ r}{2}}(x_0)$ and $B_{\frac{3 r}{4}}(x_0)$, namely, $0\le\psi_1, \, \psi_2\le 1$, $\psi_1=1$ in $B_{\frac{ r}{2}}(x_0)$, $\psi_1=0$ in $\Omega\setminus B_{\frac{3 r}{4}}(x_0)$, and $\psi_1+\psi_2=1$. Let us finally set \[ \zeta_n=\psi_1\varphi_{r,n}+\rho_{m_n}* (\psi_2\varphi_{r,n}) \]
where $\rho_{m_n}$ is a mollifier and $\{m_n\}_{n\in\mathbb{N}}$ is a sequence of numbers converging to $0$ to be suitably determined. Notice that $\zeta_n(x)\equiv \varphi_{r}(x)\color{black}+\frac1n|x-x_0|^2$ in $B_{\frac{ r}{2}}(x_0)$ and that $\zeta_n(x_0)=u(x_0)$ for $n$ large. Moreover, thanks to the properties of mollifiers, for any $n\in \mathbb N$ there exists $m_n\in\mathbb{N}$ such that \[ u-\zeta_n\le -\frac{ r^2}{8n} \ \ \ \mbox{in} \ \ \ \Omega\setminus B_{\frac{ r}{2}}(x_0). \] Then, $\zeta_n\in C^{2}$ is a good test function for Definition \ref{def1} and we have \[ h(x_0)u(x_0)+\mathcal{L}_{s}(\Omega(x_0),\zeta_n(x_0)) \le \ f(x_0). \] Taking the limit as $n\to\infty$ we prove the assertion by applying Lemma \ref{continuity}. Note indeed that $\zeta_n\to \varphi$ in $C^2(B_{\frac r2}(x_0))$ and $\zeta_n\to u$ pointwise in $\Omega\setminus B_{\frac r2}(x_0)$. \end{proof}
In proving the stability of families of viscosity solutions, a suitable notion of limit for sequences of upper semicontinuous functions has to be considered, see for instance \cite{caff}. We introduce the following.
\begin{defin}[$\Gamma$-convergence] A sequence of upper-semicontinuous functions $v_n$ is said to \emph{$\Gamma$-converge to $v$ in $ D\subset \mathbb{R}^M$} if \begin{align}
&\label{gamma1} \mbox{for all converging sequences } z_n\to \bar z \mbox{ in } D \ : \quad
\limsup_{n\to\infty}v_n(z_n)\le v(\bar z)\\
&\label{gamma2}
\mbox{for all $\bar z\in D$ there exists a sequence } z_n\to \bar z \ \ : \ \
\lim_{n\to\infty}v_n(z_n)=v(\bar z).
\end{align} \end{defin}
This concept corresponds (up to a sign change) to a localized version of the
classical $\Gamma$-convergence notion, see \cite{DalMaso93}, hence
the same name.
Clearly, uniformly converging sequences in $\Omega$ are also $\Gamma$-converging. Moreover, $\Gamma$-convergence readily ensues in connection with the upper-semicontinuous envelope of a family of functions. Both examples will play a role in the sequel.
The following stability result is an adaptation of the classical one provided in Proposition 4.3 of \cite{CIL} (see also Lemma 4.5 in \cite{caff}). \begin{lemma}[Stability]\label{stability} Let us consider $v\in \text{\rm USC}_b((0,T)\times\Omega)$ and $f\in C((0,T)\times\Omega)$. Assume moreover that \begin{align*} i)&\quad \{v_n\}\subset \text{\rm USC}_b((0,T)\times\Omega) \ \ \text{$\Gamma-$converges to $v$ in $(0,T)\times\Omega$,}\\
ii)&\quad f_n \to f, h_n \to h \ \ \text{locally uniformly}, \mbox{ and } |\Omega_n\triangle\Omega|\to 0 \mbox{ as } n\to\infty, \\ iii)&\quad \partial_tv_n(t,x) +h_n( t, x)v_n(t,x) +\mathcal{L}_{s} (\Omega_n( t, x),v_n( t,
x) )\le f_n(
t, x) \ \ \ \mbox{in} \ \ \ (0,T)\times\Omega \ \ \text{
in the viscosity sense.} \end{align*}
Then, if $\Omega( t, x)$ satisfies \eqref{contpar},\eqref{omegax}, it follows that \[ \partial_tv( t, x)+h( t, x)v( t, x)+\mathcal{L}_{s} (\Omega( t, x),v( t, x))\le f( t, x) \] in the viscosity sense. \end{lemma} \begin{remark}\label{ellipticparabolic}\rm An elliptic version of the Lemma holds true by assuming \eqref{continuita}-\eqref{Sigmadef}. Notice that the stationary case can be straightforwardly obtained from the evolutionary one upon interpreting
$u:\Omega \to \mathbb{R}$ as a trivial time-dependent function
$\tilde u(t,x)= u(x)$ on $(0,T)\times\Omega$. In
fact, if such a function is touched from above or from below
by a smooth function $\phi\in C^2((0,T)\times\Omega)$ at some
$(t,x)\in(0,T)\times\Omega$, we have that $\partial_t
\varphi(t,x)=0$. We hence conclude that such time-dependent
representation $\tilde u(t,x) $ of a subsolution (or supersolution) $u(x)$ of the
elliptic problem is subsolution (supersolution, respectively) of
its parabolic counterpart. \end{remark} \begin{proof}[ Proof of Lemma \ref{stability}] Let us assume that $v-\varphi$ has a strict global maximum, equal to
$0$, at $(\bar t,\bar x)\in (0,T)\times \Omega$. Taking $\varphi_{\theta}=\varphi+\theta (|t-\bar t|^2+|x-\bar x|^2)$, we have that the $\sup (v-\varphi_{\theta})$ is also attained only at $(\bar t,\bar x)$. Owing to the assumption $i)$, we know that there exists a sequence of points $\{( \tau_{n}, y_{n})\}\subset (0,T)\times\Omega$ such that \begin{equation}\label{6.10}
Thanks to the penalization in the definition of $\varphi_{\theta}$ and assumption $i)$, for $n$ large enough we have that \[ v_n(\tau_{n}, y_{n})-\varphi_{\theta}(\tau_{n}, y_{n})\le \sup_{(0,T)\times \Omega} (v_n-\varphi_{\theta})=v_n(t_{n}, x_{n})-\varphi_{\theta}(t_{n}, x_{n}) =\epsilon_n, \] for some $\{(t_{n}, x_{n})\}\subset (0,T)\times\Omega$ such that, up to a not relabeled subsequence, $(t_n,x_n)\to(\tilde t, \tilde x)\in (0,T)\times\Omega$. Using again the $\Gamma-$convergence of $v_n$, we find \[ (v-\varphi_{\theta})(\tilde t, \tilde x)\ge \limsup_{n\to\infty }(v_n-\varphi_{\theta})(t_{n}, x_{n}) \ge\lim_{n\to\infty}(v_n-\varphi_{\theta})(\tau_{n}, y_{n})=(v-\varphi_{\theta})(\bar t,\bar x)=0. \] Since the supremum of $v-\varphi$ is strict, this implies that $(\tilde t, \tilde x)=(\bar t, \bar x)$ and that $\epsilon_n\to 0$. Moreover, setting $\varphi_n=\varphi_{\theta}+\epsilon_n$, it follows that $\sup_{(0,T)\times \Omega}(v_n-\varphi_{n})=(v_n-\varphi_{n})(t_{n}, x_{n})=0$.
In conclusion, we have proved that $v_n-\varphi_n$ has a global maximum at $(t_n,x_n)$ (for $n$ large enough) and $v_n(t_n,x_n)=\varphi_n(t_n,x_n)$, that $\epsilon_n\to 0$, and that $(t_n,x_n)\to(\bar t, \bar x)$. Since each $v_n$ is a subsolution at $(\bar t, \bar x)$ we get \[ \partial_t\varphi_n( t_n,x_n)+h_n( t_n,x_n)v_{n}( t_n,x_n)+\mathcal{L}_{s} (\Omega_n(t_n,x_n),\varphi_n( t_n,x_n))\le f_n( t_n,x_n). \] The first two terms in the left-hand side and the one in the right-hand side easily pass to the limit for $n \to \infty$, thanks to the continuity of $h$ and $f$ and to the definition of $\varphi_n$. In order to deal with the integral operator notice that \[ \mathcal{L}_{s} (\Omega_n(t_n,x_n),\varphi_n( t_n,x_n))=\mathcal{L}_{s} (\Omega_n(t_n,x_n),\varphi( t_n,x_n)). \] Since $\varphi$ is smooth, we can use Lemma \ref{continuity}, with $\Theta(x_n)=\Omega_n(t_n,x_n)$ and $\phi_n(\cdot)=\varphi( t_n,\cdot)$, and pass to the limit with respect to $n$. Eventually, we get \[ \partial_t\varphi_{\theta}( \bar t,\bar x)+h( \bar t,\bar x)v ( \bar t,\bar x)+\mathcal{L}_{s} (\Omega( \bar t,\bar x),\varphi_{\theta}( \bar t,\bar x))\le f( \bar t,\bar x), \] for any $\theta>0$. Taking the limit (using Lemma \ref{continuity} again) as $\theta\to 0$, we obtain the desired result.\color{black} \end{proof}
The previous stability result highlights the robustness of
the notion of viscosity solution in relation to limit procedures. Notice that, given any uniformly bounded sequence of viscosity (sub/super) solutions of a certain family of equations, one can always find a $\Gamma-$limit and this is the candidate (sub/super) solution for the limiting equation. Such candidate is given by the lower/upper half relaxed limit \[ \overline{v}(x)=\sup\{\limsup_{n\to\infty} v_n(x_n) \ : \ x_n\to x\}, \ \ \ \ \underline{v}(x)=\inf\{\liminf_{n\to\infty} v_n(x_n) \ : \ x_n\to x\}. \] It is easy to check that $v_n$ $\Gamma-$converges to $\overline{v}$ and $-v_n$ $\Gamma-$converges to $-\underline{v}$. The key point here is that no compactness on the sequence $v_n$ is required for the existence of $\overline{v}$ and $\underline{v}$, as boundedness suffices. As we shall see in Section \ref{seceigenvalue}, this is a particularly powerful tool when dealing with equations that satisfy a comparison principle. \color{black}
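As an elementary illustration (included here only for concreteness, and playing no role in what follows), take $v_n(x)=\sin(nx)$ on $(0,1)$: for every $x\in(0,1)$ one can choose $x_n\to x$ with $\sin(nx_n)=1$, as well as $y_n\to x$ with $\sin(ny_n)=-1$, so that
\[
\overline{v}\equiv 1 \ \ \ \mbox{and} \ \ \ \underline{v}\equiv -1 \ \ \ \mbox{in} \ (0,1),
\]
even though the sequence $\{v_n\}$ has no pointwise limit: boundedness alone makes the half-relaxed limits well defined.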
\subsection{Sup- and infconvolution} In the sequel we often need to determine the equation (or inequality) solved by the difference of sub- or supersolutions. Note that, since such functions are not smooth, the property of being a sub- or supersolution may not be preserved by taking differences. To deal with this difficulty, we need to use suitable regularizations of the involved functions. Let us start by recalling the definition of the {\it supconvolution} of $u\in \text{\rm USC}_b((0,T)\times\Omega)$, namely, \begin{equation}\label{supcon}
u^{\epsilon}(t,x)=\sup_{(\tau, y) \in (0,T)\times\Omega}\left\{u(\tau,y)-\frac1{\epsilon}(|x-y|^2+|t-\tau|^2)\right\}. \end{equation} Notice that, since $u$ is upper semicontinuous and bounded, for $\epsilon$ small enough the supremum above is reached inside $(0,T)\times\Omega$. To be more precise, let us adopt the following notation: for any $(t,x)\in(0,T)\times\Omega$, let $(t^{\epsilon},x^{\epsilon})$ be a point with the following property \begin{equation}\label{01-06bis}
u^{\epsilon}(t,x)=u(t^{\epsilon},x^{\epsilon})-\frac1{\epsilon}(|x-x^{\epsilon}|^2+|t-t^{\epsilon}|^2). \end{equation} Then, \begin{equation}\label{control}
(|x-x^{\epsilon}|^2+|t-t^{\epsilon}|^2)\le 2\epsilon \|u\|_{L^{\infty}((0,T)\times\Omega)}. \end{equation} Moreover, by construction, it results that the parabola \[
P(t,x)=u(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-\frac1{\epsilon}(|t-\bar{t}^{\epsilon}|^2+|x-\bar{x}^{\epsilon}|^2) \] touches $u^{\epsilon}$ from below at $(\bar t, \bar x)$. This shows that the supconvolution is semiconvex in $(0,T)\times\Omega$. Such a property is particularly useful in order to evaluate pointwise viscosity inequalities related to subsolutions. Indeed, let us assume that $u^{\epsilon}$ is touched from above by a smooth function at $(\bar t, \bar x)$. Then, thanks to its semiconvexity property, we deduce that $u^{\epsilon}\in C^{1,1}(\bar t, \bar x)$, namely there exist $q \in\mathbb{R}^{N+1}$ and $C>0$ such that, in a neighborhood of $(\bar t, \bar x)$, \begin{equation}\label{c11}
\left|u^{\epsilon}(t,x)-u^{\epsilon}(\bar t, \bar x)-q \cdot\binom{t-\bar t}{x-\bar x}\right|\le C(|t-\bar{t}|^2+|x-\bar{x}|^2). \end{equation} This means that the time derivative and the fractional operator can be evaluated pointwise at $(\bar t, \bar x)$ (see Lemma \ref{pointsub} below).
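Before proceeding, let us record the elementary computation behind \eqref{control}, which we add here for the reader's convenience: choosing $(\tau,y)=(t,x)$ in \eqref{supcon} gives $u^{\epsilon}(t,x)\ge u(t,x)$, and therefore, by \eqref{01-06bis},
\[
\frac1{\epsilon}(|x-x^{\epsilon}|^2+|t-t^{\epsilon}|^2)=u(t^{\epsilon},x^{\epsilon})-u^{\epsilon}(t,x)\le u(t^{\epsilon},x^{\epsilon})-u(t,x)\le 2\|u\|_{L^{\infty}((0,T)\times\Omega)}.
\]
In particular, $(t^{\epsilon},x^{\epsilon})\to(t,x)$ as $\epsilon\to0$.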
Similarly, the {\it infconvolution} of a function $v\in \text{\rm LSC}((0,T)\times\Omega)\cap L^{\infty}((0,T)\times\Omega)$ is defined as \begin{equation}\label{infconv}
v_{\epsilon}(t,x)=\inf_{(\tau, y) \in (0,T)\times\Omega}\left\{v(\tau,y)+\frac1{\epsilon}(|x-y|^2+|t-\tau|^2)\right\}, \end{equation} and we let $(t_{\epsilon},x_{\epsilon})$ be the point where \begin{equation}\label{01-06quater}
v_{\epsilon}(t,x)=v(t_{\epsilon},x_{\epsilon})+\frac1{\epsilon}(|x-x_{\epsilon}|^2+|t-t_{\epsilon}|^2). \end{equation} The properties of the infconvolution correspond to those of the supconvolution up to the trivial transformation $v_{\epsilon}=-(-v)^{\epsilon}$. Omitting further details for the sake of brevity, we limit ourselves to proving the following inequality on supconvolutions.
\begin{lemma}\label{pointsub} Let us assume \eqref{Sigmadef}, \eqref{contpar}, and \eqref{omegax} and that $u(t,x)$ is a viscosity subsolution to \eqref{parabolic} and let $u^{\epsilon}(t,x)$ be its supconvolution. If $u^{\epsilon}$ is touched from above by some smooth function at $(\bar t, \bar x)$, the following inequality holds in a classical sense \begin{equation}\label{13:34uno} \partial_tu^{\epsilon}(\bar t, \bar x)+h(\bar{t}^{\epsilon},\bar{x}^{\epsilon})u^{\epsilon}(\bar t, \bar x)+\mathcal{L}_{s} (\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),u^{\epsilon}(\bar t, \bar x))\le f(\bar{t}^{\epsilon},\bar{x}^{\epsilon}), \end{equation} where $(\bar{t}^{\epsilon},\bar{x}^{\epsilon})$ satisfies \eqref{01-06bis}. \end{lemma} \begin{proof} Let us assume that $u^{\epsilon}$ is touched from above by a smooth $\varphi$ at $(\bar t,\bar x)$. Let us recall that there exists $(\bar{t}^{\epsilon},\bar{x}^{\epsilon})$ such that \[
u^{\epsilon}(\bar t,\bar x)=u(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-\frac1{\epsilon}(|\bar{t}-\bar{t}^{\epsilon}|^2+|\bar{x}-\bar{x}^{\epsilon}|^2) \ \ \ \mbox{and} \ \ \ (\bar{t}^{\epsilon},\bar{x}^{\epsilon})\to(\bar t,\bar x) \ \ \ \mbox{as} \ \ \ \epsilon\to0.
u^{\epsilon}(t+\bar t-\bar{t}^{\epsilon},x+\bar x-\bar{x}^{\epsilon})\ge u(\tau,y)-\frac1{\epsilon}(|t+\bar t-\bar{t}^{\epsilon}-\tau|^2+|x+\bar x-\bar{x}^{\epsilon}-y|^2). \] Choosing $(\tau,y)=(t,x)$ it follows \[
u^{\epsilon}(t+\bar t-\bar{t}^{\epsilon},x+\bar x-\bar{x}^{\epsilon})\ge u(t,x)-\frac1{\epsilon}(|\bar t-\bar{t}^{\epsilon}|^2+|\bar x-\bar{x}^{\epsilon}|^2). \] Then, by defining \[
\bar{\varphi}(t,x)=\varphi(t+\bar t-\bar{t}^{\epsilon},x+\bar x-\bar{x}^{\epsilon})+\frac1{\epsilon}(|\bar t-\bar{t}^{\epsilon}|^2+|\bar x-\bar{x}^{\epsilon}|^2), \] we infer that $\bar{\varphi}$ touches $u$ from above at $(\bar{t}^{\epsilon},\bar{x}^{\epsilon})$. Since $u$ is a viscosity subsolution to \eqref{parabolic}, it follows that \begin{equation}\label{01-06} \partial_t\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})+h(\bar{t}^{\epsilon},\bar{x}^{\epsilon})\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})+\mathcal{L}_{s} (\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon}))\le f(\bar{t}^{\epsilon},\bar{x}^{\epsilon}). \end{equation} Now notice that by construction of $\bar{\varphi}$ it results that $\partial_t\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})=\partial_t{\varphi}_{r}(\bar{t},\bar{x})$ and \[
\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon}+z)= \varphi_{r}(\bar t,\bar x)-\varphi_{r}(\bar t,\bar x+z), \] that implies \[
\mathcal{L}_{s} (\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon}))=\int_{\tilde{\Omega}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})}\frac{\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-\bar{\varphi}_{r}(\bar{t}^{\epsilon},\bar{x}^{\epsilon}+z)}{|z|^{N+2s}}dz \] \[
=\int_{\tilde{\Omega}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})}\frac{\varphi_{r}(\bar t,\bar x)-\varphi_{r}(\bar t,\bar x+z)}{|z|^{N+2s}}dz=\mathcal{L}_{s} (\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),{\varphi}_{r}(\bar{t},\bar{x})). \] Then, \eqref{01-06} becomes \[ \partial_t{\varphi}_{r}(\bar{t},\bar{x})+h(\bar{t}^{\epsilon},\bar{x}^{\epsilon})u^{\epsilon}(\bar{t},\bar{x})+\mathcal{L}_{s} (\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),{\varphi}_{r}(\bar{t},\bar{x}))\le f(\bar{t}^{\epsilon},\bar{x}^{\epsilon}). \] Since $u^{\epsilon}$ is touched from above by a smooth function at $(\bar t,\bar x)$, we know that $u^{\epsilon}\in C^{1,1}(\bar t,\bar x)$ (see \eqref{c11}) and then $\partial_t{\varphi}_{r}(\bar{t},\bar{x})=\partial_tu^{\epsilon}(\bar{t},\bar{x})$. Recalling assumption \eqref{omegax} and since $(\bar{t}^{\epsilon},\bar{x}^{\epsilon})\to (\bar t,\bar x)\in (0,T)\times \Omega$, we deduce that there exists $\delta>0$ such that, for any $r<\delta$, we can decompose the nonlocal operator as follows \begin{align*} \mathcal{L}_{s}
(\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),{\varphi}_{r}(\bar{t},\bar{x}))
&=\int_{\Sigma \cap B_r(0)}\frac{\varphi(\bar t,\bar x)-\varphi(\bar
t,\bar x+z)}{|z|^{N+2s}}dz\\
&\quad +\int_{\tilde{\Omega}(\bar{t}^{\epsilon},\bar{x}^{\epsilon})\setminus B_r(0)}\frac{u^{\epsilon}(\bar t,\bar x)-u^{\epsilon}(\bar t,\bar x+z)}{|z|^{N+2s}}dz. \end{align*} The integral on $\Sigma\cap B_r(0)$ is well defined and converges to zero as $r\to0$, due to the smoothness of $\varphi$ and the symmetry of $\Sigma$. To deal with the second integral we apply Lemma \ref{continuity} with $x_n=\bar x^{\epsilon}$, $\Theta(x_n)=\Omega(\bar{t}^{\epsilon},x_n)\setminus B_{\frac1n}(x_n)$ and $\phi_{n}(\cdot)=\phi(\cdot)=u^{\epsilon}(\bar t,\cdot)$. We deduce that $\mathcal{L}_{s}(\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),{\varphi}_{r}(\bar{t},\bar{x}))\to \mathcal{L}_{s} (\Omega(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),u^{\epsilon}(\bar{t},\bar{x}))$ as $r\to0$. This completes the proof of the Lemma. \end{proof} A similar inequality holds for the infconvolution. For the sake of later reference, we state it here below without proof. This can be obtained by straightforwardly adapting the argument of Lemma \ref{pointsub}. \begin{lemma}\label{pointsup} Let us assume that $v(t,x)$ is a viscosity supersolution to \eqref{parabolic} and let $v_{\epsilon}(t,x)$ be its infconvolution. If $v_{\epsilon}$ is touched from below by some smooth function at $(\bar t, \bar x)$, the following inequality holds in a classical sense \begin{equation}\label{13:34due} \partial_tv_{\epsilon}(\bar t, \bar x)+h(\bar{t}_{\epsilon},\bar{x}_{\epsilon})v_{\epsilon}(\bar t, \bar x)+\mathcal{L}_{s} (\Omega(\bar{t}_{\epsilon},\bar{x}_{\epsilon}),v_{\epsilon}(\bar t, \bar x))\ge f(\bar{t}_{\epsilon},\bar{x}_{\epsilon}), \end{equation} where $(\bar{t}_{\epsilon},\bar{x}_{\epsilon})$ satisfies \eqref{01-06quater}. \end{lemma} In the following Lemma we eventually state that the difference of a sub- and a supersolution is a subsolution of a suitably modified problem. \begin{lemma}[Difference]\label{differenza} Let us consider $h_1(t,x), h_2(t,x)$ that satisfy \eqref{alphap}, $\{\Omega_1(t,x)\}, \{\Omega_2(t,x)\}$ that satisfy \eqref{contpar},\eqref{omegax} and two functions $u\in USC_b((0,T)\times\Omega), v\in LSC_b((0,T)\times\Omega)$ that solve in the viscosity sense \[ \partial_tu(t,x)+h_1(t,x)u(t,x)+\mathcal{L}_{s} (\Omega_1(t,x),u(t,x))\le f_1(t,x) \qquad \mbox{in } (0,T)\times\Omega \] \[ \partial_tv(t,x)+h_2(t,x)v(t,x)+\mathcal{L}_{s} (\Omega_2(t,x),v(t,x))\ge f_2(t,x) \qquad \mbox{in } (0,T)\times\Omega, \] respectively. Then $w=u-v$ solves in the viscosity sense \[
\partial_tw(t,x)+h_1(t,x)w(t,x)+\mathcal{L}_{s} (\Omega_1(t,x),w(t,x))\le \tilde{f}(t,x) \qquad \mbox{in } (0,T)\times\Omega \] where \[
\tilde{f}(t,x)=f_1({t} ,{x} )-f_2({t} ,{x} )+M|h_1({t} ,{x} )-h_2({t} ,{x} )|+ 2M\int_{ |z|\ge \frac{\zeta}{2}d(x) }\frac{|\chi_{\tilde{\Omega}_1({t} ,{x} )}-\chi_{\tilde{\Omega}_2({t} ,{x} )}|}{|z|^{N+2s}}dz,\color{black} \]
with $M=\max\{\|u\|_{L^{\infty}((0,T)\times\Omega)},\|v\|_{L^{\infty}((0,T)\times\Omega)}\}$. \end{lemma} \begin{proof} Recalling definitions \eqref{supcon} and \eqref{infconv}, let us consider the function $w^{\epsilon}(t,x)=u^{\epsilon}(t,x)-v_{\epsilon}(t,x)$ and assume that it is touched from above by a $\varphi\in C^2((0,T)\times\Omega)$ at a point $ (\bar{t}, \bar{x})$. This means that \[ u^{\epsilon}(\bar{t}, \bar{x})-v_{\epsilon}(\bar{t}, \bar{x})=\varphi(\bar{t}, \bar{x}) \ \ \ \mbox{and} \ \ \ u^{\epsilon}-v_{\epsilon}\le \varphi
\ \ \mbox{in} \ \ (0,T)\times\Omega. \]
This latter fact, together with the semiconvexity property of both
$u^{\epsilon}$ and $-v_{\epsilon}$, implies that $u^{\epsilon}$ and
$-v_{\epsilon}$ are $C^{1,1}(\bar{t},\bar{x})$ (see
\eqref{c11}). We are hence in the position of applying
Lemmas \ref{pointsub} and \ref{pointsup} and evaluating the
inequalities satisfied by $u^{\epsilon}$ and $v_{\epsilon}$ pointwise. We have that \[ \partial_tu^{\epsilon} (\bar{t}, \bar{x})+h_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})u^{\epsilon} (\bar{t}, \bar{x})+\mathcal{L}_{s}(\Omega_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),u^{\epsilon} (\bar{t}, \bar{x}))\le f_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon}), \] and that \[ \partial_tv_{\epsilon} (\bar{t}, \bar{x})+h_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})v_{\epsilon} (\bar{t}, \bar{x})+\mathcal{L}_{s}(\Omega_2( \bar{t}_{\epsilon},\bar{x}_{\epsilon}), v_{\epsilon} (\bar{t}, \bar{x}))\ge f_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon}). \] Recalling the ordering assumption between $w^{\epsilon}$ and $\varphi$ and combining the two inequalities above, we infer that, for $\epsilon$ small enough, \begin{align}
\label{01-06tris} &\partial_t \varphi_r(\bar t,\bar
x)+h_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})\varphi_r(\bar t,\bar
x)+ \mathcal{L}_s
(\Omega_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon}), \varphi_r(\bar
t,\bar x))\\[2mm]
&\nonumber \quad \le \partial_t w^{\epsilon}(\bar t,\bar
x)+h_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})w^{\epsilon}(\bar t,\bar
x)+\mathcal{L}_s(\Omega_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon}),
w^{\epsilon}(\bar t,\bar x))\\[2mm]
&\nonumber \quad \le
f_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-f_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})+M|h_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-h_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})|\\
&\nonumber \qquad +2M\int_{\mathbb{R}^N}\frac{|\chi_{\tilde{\Omega}_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})}-\chi_{\tilde{\Omega}_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})}|}{|z|^{N+2s}}dz. \end{align} Let us stress that, for $\epsilon$ small enough, the integral term in the right-hand side above is finite. Indeed, thanks to the assumption \eqref{omegax} and since $(\bar{t}^{\epsilon},\bar{x}^{\epsilon})\to (\bar{t},\bar{x})$ and $(\bar{t}_{\epsilon},\bar{x}_{\epsilon})\to (\bar{t},\bar{x})$ as $\epsilon\to 0$, then \[ \exists \ \epsilon_0>0 \ : \ \forall \epsilon\in(0,\epsilon_0) \ \ \ B_{\frac{\zeta}{2}d(\bar x)}\cap \tilde{\Omega}_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})= B_{\frac{\zeta}{2}d(\bar x)}\cap \tilde{\Omega}_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})=B_{\frac{\zeta}{2}d(\bar x)}\cap \Sigma. \] This implies that \begin{align*}
&\int_{\mathbb{R}^N}\frac{|\chi_{\tilde{\Omega}_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})}-\chi_{\tilde{\Omega}_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})}|}{|z|^{N+2s}}dz=\int_{|z|\ge
\frac{\zeta}{2}d(\bar
x)}\frac{|\chi_{\tilde{\Omega}_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})}-\chi_{\tilde{\Omega}_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})}|}{|z|^{N+2s}}dz\\
&\qquad \le \left(\frac{2}{\zeta d(\bar x)}\right)^{N+2s}|\tilde{\Omega}_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})\triangle\tilde{\Omega}_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})|. \end{align*} Since \eqref{01-06tris} is true any time that $w^{\epsilon}$ is touched from above by a smooth $\varphi$ at some point in $(0,T)\times\Omega$, we can conclude that $w^{\epsilon}$ solves in the viscosity sense \[ \partial_t w^{\epsilon}( t,
x)+h_{\epsilon}(t,x)w^{\epsilon}( t,
x)+\mathcal{L}_s(\Omega_{\epsilon}(t,x),
w^{\epsilon}( t, x)) \le f^{\epsilon}( t, x), \] where \begin{align*} &h_{\epsilon}(t,x)=h_1({t}^{\epsilon},{x}^{\epsilon}),\\ &\Omega_{\epsilon}(t,x)=\Omega_1({t}^{\epsilon},{x}^{\epsilon}),\\
&f^{\epsilon}(\bar t, \bar x)= f_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-f_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})+M|h_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})-h_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})| +2M\int_{|z|\ge \frac{\zeta}{2}d(\bar x)}\frac{|\chi_{\tilde{\Omega}_1(\bar{t}^{\epsilon},\bar{x}^{\epsilon})}-\chi_{\tilde{\Omega}_2(\bar{t}_{\epsilon},\bar{x}_{\epsilon})}|}{|z|^{N+2s}}dz \end{align*} and the point ${t}^{\epsilon},{x}^{\epsilon}$ is related to $( t, x)$ through \eqref{01-06bis} and \eqref{control}. Thanks to Lemma \ref{stability}, we can pass to the limit in \eqref{01-06tris} as $\epsilon\to 0$ and obtain the desired result. \end{proof}
For later purposes we also explicitly state an elliptic version of Lemma \ref{differenza}. \begin{cor}\label{ellipticdif} Assume \eqref{alpha}-\eqref{ostationary}, that $f_1,f_2\in C(\Omega)$ satisfy \eqref{fcon} and that $u\in USC_b(\Omega)$, $v\in LSC_b(\Omega)$ solve \[ h(x)u(x)+\mathcal{L}_{s} (\Omega(x),u(x))\le f_1(x) \qquad \mbox{in } \Omega \] \[ h(x)v(x)+\mathcal{L}_{s} (\Omega(x),v(x))\ge f_2(x) \qquad \mbox{in } \Omega, \] respectively. Then $w=u-v$ solves in the viscosity sense \[ h(x)w(x)+\mathcal{L}_{s} (\Omega(x),w(x))\le f_1(x)-f_2(x) \qquad \mbox{in } \Omega. \] \end{cor} \begin{remark}
For the sake of brevity, we do not provide a proof of Corollary \ref{ellipticdif}; see Remark \ref{ellipticparabolic}. \end{remark}
\subsection{Regularity}
By adapting the regularity theory for fully nonlinear
integro-differential equations from \cite[Sec. 14]{caff} we can
prove the following.
\begin{teo}[H\"older regularity]\label{regelliptic} Let us assume \eqref{alpha}-\eqref{Sigmadef}, that $f\in C(\Omega)\cap\elle{\infty}$, and that $u\in C(\Omega)\cap\elle{\infty}$ solves in the viscosity sense \[
h(x)u(x)+\mathcal{L}_{s}(\Omega(x), u(x) )= f(x) \quad \mbox{in } \Omega. \] Then, for any open sets $\Omega'\subset\subset \Omega''\subset\subset\Omega$, it follows that \[
\|u\|_{C^{\gamma}(\Omega')}\le \tilde C, \]
where $\gamma \in (0,1)$ and $\tilde C=\tilde C(\|f\|_{\elle{\infty}}, s, \zeta, d(\Omega'', \Omega'),\|u\|_{\elle{\infty}})$. \end{teo} \begin{proof} We claim that \[
\mathcal{L}_{s}(\Sigma({x}),u(x))= \tilde f(x) \quad \mbox{in } \Omega \]
in the viscosity sense, where $\Sigma({x})=\Sigma+x$ and $\tilde f\in
C(\Omega)$ is a suitable function such that $\tilde f(x)\approx
d(x)^{-2s}$ close to $\partial\Omega$. Once such a property is
verified, the proof of the Theorem follows from \cite[Theorem 4.6]{mou}. See also \cite[Theorem 7.2]{schwabsil}, where the parabolic problem is treated.
In order to prove the claim, we follow the ideas of \cite[Sec. 14]{caff}. Let us assume that, any time $u$ is touched from above with a smooth function at some $x\in \Omega$, $u$ belongs to $C^{1,1}( x)$. Using Lemma \ref{continuity}, we deduce that \[ h({x})u( x)+\mathcal{L}_{s} (\Omega({x}),u( x))\le f(x) \] pointwise for any such a $x\in \Omega$. Thanks to assumption \eqref{ostationary}, the nonlocal operator can be estimated as follows \begin{align*}
& \mathcal{L}_{s}(\Omega({x}),
u(x))=\int_{\Sigma\cap B_{\zeta d(x)}}\frac{ u( {x} )- u(x
+z )}{|z|^{N+2s}}dz+\int_{\tilde{\Omega}(x)\setminus B_{\zeta d(x)}}\frac{ u(
x )- u( x+z )}{|z|^{N+2s}}dz\\
&\quad =\int_{\Sigma}\frac{ u( x )- u( x+z
)}{|z|^{N+2s}}dz+\int_{\tilde{\Omega}(x)\setminus B_{\zeta d(x)}}\frac{ u(
x )- u( x+z )}{|z|^{N+2s}}dz-\int_{\Sigma\setminus B_{\zeta d(x)}}\frac{ u(
x )- u( x+z )}{|z|^{N+2s}}dz\\
&\quad = \mathcal{L}_{s}(\Sigma({x}), u(x))+\int_{\tilde{\Omega}(x)\setminus B_{\zeta d(x)}}\frac{ u( x )- u( x+z )}{|z|^{N+2s}}dz-\int_{\Sigma\setminus B_{\zeta d(x)}}\frac{ u( x )- u( x+z )}{|z|^{N+2s}}dz. \end{align*} Let us set \[ \tilde f(x)=f(x)-\int_{\tilde{\Omega}(x)\setminus B_{\zeta d(x)}}\frac{ u(
x )- u( x+z )}{|z|^{N+2s}}dz+\int_{\Sigma\setminus B_{\zeta d(x)}}\frac{ u(
x )- u( x+z )}{|z|^{N+2s}}dz. \] This proves that \[
\mathcal{L}_{s}(\Sigma({x}),
u(x)) \le \tilde f(x), \] assuming that $u$ belongs to $C^{1,1}(x)$. Let us apply this argument to the supconvolution $u^{\epsilon}$. By definition of the supconvolution, we recall that, any time $u^{\epsilon}$ is touched from above by a smooth function $\varphi$ at $x\in \Omega$, then $u^{\epsilon}\in C^{1,1}(x)$. Moreover, thanks to Lemma \ref{pointsub}, we have that \[ h(x^{\epsilon})u^{\epsilon}(x)+\mathcal{L}_{s}(\Omega(x^{\epsilon}), u^{\epsilon}(x) )\le f(x^{\epsilon}) \] pointwise for any such $x\in\Omega $. Then, thanks to the argument above, it follows that \[
\mathcal{L}_{s}(\Sigma({x^{\epsilon}}),
u^{\epsilon}(x)) \le \tilde f(x^{\epsilon}). \] Eventually, thanks to the stability property of viscosity solutions (see Lemma \ref{stability}), we can pass to the limit in the inequality above to conclude that \[
\mathcal{L}_{s}(\Sigma({x}),
u(x)) \le \tilde f(x), \] in the viscosity sense. Similarly, one can check that \[
\mathcal{L}_{s}(\Sigma({x}),
u(x))\ge \tilde f(x), \] and the proof of the initial claim follows. \end{proof}
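Let us also record, for the reader's convenience, a rough upper bound consistent with the behavior $\tilde f(x)\approx d(x)^{-2s}$ claimed above: since $|u(x)-u(x+z)|\le 2\|u\|_{\elle{\infty}}$, each of the two correction integrals appearing in the definition of $\tilde f$ is bounded by
\[
2\|u\|_{\elle{\infty}}\int_{|z|\ge \zeta d(x)}\frac{dz}{|z|^{N+2s}}=\frac{\omega_N}{s}\,\|u\|_{\elle{\infty}}\,(\zeta d(x))^{-2s},
\]
where $\omega_N=|\mathbb{S}^{N-1}|$, so that $|\tilde f(x)|\le \|f\|_{\elle{\infty}}+\frac{2\omega_N}{s}\|u\|_{\elle{\infty}}(\zeta d(x))^{-2s}$.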
\subsection{Equivalence with the fractional laplacian} We now present two technical lemmas, shedding light on the relation between the operator in \eqref{10:17} and the classical
fractional laplacian.
\begin{lemma}[Equivalence]\label{acca} The function defined in \eqref{15:15} can be equivalently written as \[
a(x)=\Gamma(2s+1)\int_{\tilde{S}(x)^c}\frac{dz}{|z|^{N+2s}}, \] where $S(x)$ is the largest star-shaped subset of $\Omega$ centered at $x$ and $\tilde{S}(x)=x-S(x)$.
If $\Omega$ is convex, the fractional laplacian $(-\Delta)_s$ defined in \eqref{decomposition} is equivalent to the elliptic operator defined in \eqref{start}, as $$\Gamma(2s+1) (-\Delta)_s\varphi(x) = a(x) \varphi (x)+ (-\Delta)^\star_s\varphi (x)$$ on suitably smooth function $\varphi$. \end{lemma} \begin{proof} We firstly notice that \begin{align*}
a(x)&=\int_{\mathbb{R}^N}\frac{1}{|y|^{N+2s}}e^{-\frac{d(x,\sigma(y))}{|y|}}dy\\ & =\int_{\omega^{N-1}}\int_0^{\infty}\frac{1}{\rho^{1+2s}}e^{-\frac{d(x,\sigma)}{\rho}}d\rho
d\sigma=\int_{\omega^{N-1}}\frac{1}{d(x,\sigma)^{2s}}d\sigma
\int_0^{\infty}\frac{1}{r^{1+2s}}e^{-\frac{1}{r}}dr\\ & =\int_{\omega^{N-1}}\frac{1}{d(x,\sigma)^{2s}}d\sigma\int_0^{\infty}t^{2s-1}e^{-t}dt=\Gamma(2s)\int_{\omega^{N-1}}\frac{d\sigma}{d(x,\sigma)^{2s}} \end{align*} where we recall that $d(x,\sigma(y))$ denotes the distance between $x$ and the first point reached on $\partial \Omega$ by the ray from
$x$ with direction $\sigma(y) = y/|y|$; in the chain above we passed to polar coordinates $y=\rho\sigma$ and then used the changes of variables $\rho=d(x,\sigma)r$ and $t=1/r$. On the other hand, we have that \[
\int_{\tilde{S}(x)^c}\frac{1}{|z|^{N+2s}}dz=\int_{\omega^{N-1}}\int_{d(x,\sigma)}^{\infty}\rho^{-1-2s}d\rho d\sigma=\frac{1}{2s}\int_{\omega^{N-1}}\frac{d\sigma}{d(x,\sigma)^{2s}}. \] The conclusion follows from the fact that $\Gamma (2s+1)=\Gamma(2s)2s$.\\
Assume now that $\Omega$ is convex. Then $S(x)\equiv\Omega$ for any $x\in\Omega$ and \[
a(x)=\Gamma(2s+1)\int_{\{x-\Omega\}^c}\frac{1}{|z|^{N+2s}}dz=\Gamma(2s+1)\int_{\Omega^c}\frac{1}{|x-y|^{N+2s}}dy. \] Then, recalling \eqref{start}, we have that, for any $\varphi\in C^{\infty}_c(\Omega)$, \[
a(x)\varphi(x)+(-\Delta)_{s}^{\star}\varphi(x)=\Gamma(2s+1)\left[\int_{\Omega^c}\frac{1}{|x-y|^{N+2s}}dy \ \varphi(x)+p.v.\int_{\Omega}\frac{\varphi(x)-\varphi(y)}{|x-y|^{N+2s}}dy\right] \] \[
=\Gamma(2s+1)\ p.v.\int_{\mathbb{R}^N}\frac{\varphi(x)-\varphi(y)\chi_{\Omega}}{|x-y|^{N+2s}}dy=\Gamma(2s+1)(-\Delta)_ {s}\varphi (x).\qedhere \] \end{proof}
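As a simple consistency check (an example which we include only for illustration), let $\Omega=B_R(0)$ and $x=0$: then $d(0,\sigma)=R$ for every direction $\sigma$ and $\tilde{S}(0)^c=\{|z|\ge R\}$, so that the two expressions for $a$ above agree, since
\[
\Gamma(2s)\int_{\omega^{N-1}}\frac{d\sigma}{d(0,\sigma)^{2s}}=\Gamma(2s)\,\omega_N\,R^{-2s}=\Gamma(2s+1)\int_{|z|\ge R}\frac{dz}{|z|^{N+2s}},
\]
where $\omega_N=|\mathbb{S}^{N-1}|$ and we used $\int_{|z|\ge R}|z|^{-N-2s}dz=\frac{\omega_N}{2s}R^{-2s}$ together with $\Gamma(2s+1)=2s\,\Gamma(2s)$.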
We now introduce a class of domains for which the function $k(x)$ defined in \eqref{decomposition} satisfies the bounds \eqref{killing}. To this aim, we assume that the complement $\Omega^c $ satisfies a uniform positive density condition, namely that there exist $\rho_0>0$ and $\kappa >0 $ such that \begin{equation}\label{density}
|B_{\rho}(\bar x)\cap \Omega^c|\ge \kappa |B_{\rho}(\bar x)| \ \ \ \mbox{for all } \bar x\in\partial\Omega \ \mbox{ and } \ \rho\in (0,\rho_0). \end{equation} Let us stress that \eqref{density} is weaker than the exterior cone condition.
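For instance (an elementary example, recorded here only for illustration), every bounded convex domain satisfies \eqref{density} with $\kappa=\frac12$ and any $\rho_0>0$: given $\bar x\in\partial\Omega$, the set $\Omega$ lies on one side of a supporting hyperplane at $\bar x$, so that $\Omega^c$ contains an open half-space $H$ with $\bar x\in\partial H$ and
\[
|B_{\rho}(\bar x)\cap \Omega^c|\ge |B_{\rho}(\bar x)\cap H|=\frac12 |B_{\rho}(\bar x)| \ \ \ \mbox{for every} \ \rho>0.
\]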
\begin{lemma}[Bounds on $h$]\label{hbeha} Let $\Omega$ be an open bounded set of $\mathbb R^N$ with $\Omega^c$ satisfying \eqref{density}. Then, the function \[
h (x)=\int_{\Omega^c}\frac{1}{|x-y|^{N+2s}}dy \] satisfies the bounds \eqref{killing}. \end{lemma} \begin{proof} To start with, notice that $\Omega^c\subset B_{d(x)}(x)^c$ for all $x\in\Omega$. Then, \begin{equation}\label{easypart}
h(x)\le \int_{B_{d(x)}(x)^c}\frac{1}{|x-y|^{N+2s}}dy=\int_{|z|\ge d(x)}\frac{1}{|z|^{N+2s}}dz=\frac{\omega_N}{2s}\frac1{d(x)^{2s}} \end{equation} whence the upper bound in \eqref{killing}.
To prove the lower bound let us take $x\in\Omega$ such that $d(x)\le \rho_0$. We then have \[
h(x)=\int_{\Omega^c}\frac{1}{|x-y|^{N+2s}}d y \ge
\int_{\Omega^c\cap B_{d(x)}(\bar x)}\frac{1}{|x-y|^{N+2s}}d y \]
where $\bar x\in\partial\Omega$ is such that $d(x)=|x-\bar x|$. Using that if $y\in B_{d(x)}(\bar x)$ then $|x-y|\le |x-\bar x|+|\bar x-y|\le 2d(x)$ and taking advantage of \eqref{density} (recall that $d(x)\le \rho_0$), the inequality above becomes \[
h(x)\ge\frac{1}{(2d(x))^{N+2s}}|\Omega^c\cap B_{d(x)}(\bar x)|\ge C\frac{1}{d(x)^{2s}}.\qedhere \] \end{proof}
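Although it falls outside the scope of the Lemma (which concerns bounded domains), it is instructive to record the model computation for a half-space, which we add here only as an illustration: if $\Omega=\{x\in\mathbb{R}^N \ : \ x_N>0\}$ and $x=(x',d)$ with $d=d(x)$, the change of variables $y=(x'+dz',dz_N)$ gives
\[
\int_{\{y_N\le 0\}}\frac{dy}{|x-y|^{N+2s}}=d^{-2s}\int_{\{z_N\le 0\}}\frac{dz}{\left(|z'|^2+(1-z_N)^2\right)^{\frac{N+2s}2}},
\]
so that in this case $h(x)$ equals a dimensional constant times $d(x)^{-2s}$ exactly.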
\section{Existence and uniqueness}
The existence of viscosity solutions follows by applying the classical Perron method. We give here full details of this construction in the parabolic case, hence proving Theorem \ref{existence}. The proof of Theorem \ref{teide} follows the same lines, being actually simpler. We comment on it at the end of the section.
As a first step toward the implementation of the Perron method, we start by providing a suitable barrier for the elliptic problem.
\begin{lemma}[Barriers]\label{barriercone} Let us assume that the set valued function $x\to\Omega(x)$ satisfies \eqref{ostationary}, \eqref{Sigmadef}. Then there exists a positive $\bar \eta=\bar \eta(N,s,\zeta,\alpha)$ such that, for any $\eta\in(0,\bar \eta]$, the function $u_{\eta}(x)=d(x)^{\eta}$ solves the inequality \begin{equation}\label{sbomba} \frac{\alpha}{d(x)^{2s}}u_{\eta}(x)+\mathcal{L}_s(\Omega(x),u_{\eta} (x) )\ge \frac{\alpha}{2} d(x)^{\eta-2s} \ \ \ \mbox{in} \ \ \ \Omega \end{equation} in the viscosity sense. \end{lemma} \begin{remark}\label{barpar}\rm Notice that under assumptions \eqref{Sigmadef} and \eqref{omegax}, the function $u_{\eta}$, with $\eta<\bar \eta(N,s,\zeta_T,\alpha)$ also satisfies in the viscosity sense \[ \partial_tu_{\eta}(x)+ \frac{\alpha}{d(x)^{2s}}u_{\eta}(x)+\mathcal{L}_s(\Omega(t,x),u_{\eta} (x) )\ge \frac{\alpha}{2} d(x)^{\eta-2s} \ \ \ \mbox{in} \ \ \ (0,T)\times\Omega, \] since for all $t\in(0,\infty)$ the set valued function $x\to\Omega_t(x)=\Omega(t,x)$ satisfies \eqref{ostationary} with $\zeta_T$. If we moreover assume \eqref{threeprime}, the barrier is uniform in time\color{black}. \end{remark} \begin{proof}[Proof of Lemma \ref{barriercone}] Fix $x\in\Omega$ and assume that there exists $\varphi\in C^2(\Omega)$ such that $u_{\eta}-\varphi$ has a minimum in $\Omega$ at $x$ and that $u_{\eta}(x)=\varphi(x)$. We have to check (see Lemma \ref{def2}) that for any $B_r(x)\subset \Omega$ \begin{equation}\label{11:32} \frac{\alpha}{d(x)^{2s}}u_{\eta}(x)+\mathcal{L}_s(\Omega(x),\varphi_r(x))\ge \frac{\alpha}{2} d(x)^{\eta-2s}. \end{equation} Since $\varphi$ touches $u_{\eta}$ from below at $x$, we deduce \cite[Prop. 2.14]{BC} that there exists a unique
$\bar x\in\partial \Omega$ such that $d(x)=|x-\bar x|$. To simplify notation, from now on we use a system of coordinates that is centered at $\bar x$, so that $d(x)=|x|$. Notice that \[
\varphi(x)-\varphi(x+z)\ge u_{\eta}(x)-u_{\eta}(x+z)\ge (|x|^{\eta}-|x+z|^{\eta}) \ \ \ \mbox{for all} \ \ \ z\in\tilde{\Omega}(x), \] the last inequality following from the fact that $d(x+z)\le
|x+z|$. We then have that \[
\mathcal{L}_s(\Omega(x),\varphi_r(x))\ge \int_{\tilde{\Omega}(x)}\frac{|x|^{\eta}-|x+z|^{\eta}}{|z|^{N+2s}}dz \] \[
=\left(\int_{ \{|z|\le \zeta
|x|\}\cap\Sigma }\frac{|x|^{\eta}-|x+z|^{\eta}}{|z|^{N+2s}}dz+\int_{\{|z|>
\zeta
|x|\}\cap\tilde{\Omega}(x)}\frac{|x|^{\eta}-|x+z|^{\eta}}{|z|^{N+2s}}dz\right)=: I_1+I_2, \]
where we have used that $\tilde{\Omega}(x)\cap B_{\zeta
|x|}=\Sigma\cap B_{\zeta|x|}$ (see assumption \eqref{ostationary})\color{black}. Thanks to Taylor expansion, we get that \begin{align*}
|x+z|^{\eta}&=|x|^{\eta}+\eta |x|^{\eta-2}x\cdot z
+\frac12\left[\eta(\eta-2)|\xi|^{\eta-4}|\xi \cdot z|^2+\eta
|\xi|^{\eta-2}|z|^2 \right]\\ &
\le |x|^{\eta}+\eta |x|^{\eta-2}x\cdot z+2^{1-\eta}\eta |x|^{\eta-2}|z|^2 \ \ \ \mbox{ for } \ \ |z|\le \zeta |x| \end{align*} with $\xi= x+t z$ for some $t\in(0,1)$ and the
inequality follows from neglecting a negative term and from the fact that $|\xi|\ge (1-\zeta)|x|\ge \frac12 |x|$. This implies that \begin{align}\label{16:42bis}
I_1&\ge-\int_{\{|z|\le \zeta |x|\}\cap\Sigma}\left(\eta |x|^{\eta-2}x\cdot z+\eta
2^{1-\eta}
|x|^{\eta-2}|z|^2\right)\frac{1}{|z|^{N+2s}}dz\\ &
=- \eta 2^{1-\eta} |x|^{\eta-2}\int_{\{|z|\le \zeta |x|\}\cap\Sigma}|z|^{2-N-2s}dz\ge- \eta C|x|^{\eta-2s}. \nonumber
where, in the second line, we have used that the set $\{|z|\le \zeta |x|\}\cap\Sigma$ is symmetric with respect to the origin and that the first order term of the expansion vanishes in the principal value sense\color{black}. On the other hand, one has that \begin{equation}\label{16:43bis}
I_2\ge|x|^{\eta-2s}\int_{\{|y|\ge \zeta\}}\frac{1-(1+|y|)^{\eta}}{|y|^{N+2s}}dy. \end{equation}
Combining \eqref{16:42bis} and \eqref{16:43bis}, we obtain that \[ \mathcal{L}_s(\Omega(x),\varphi_r(x))\ge- g(\eta)d(x)^{\eta-2s}, \]
where $$g(\eta)=C\left(\eta+\int_{\{|y|\ge \zeta\}}\frac{(1+|y|)^{\eta}-1}{|y|^{N+2s}}dy\right).$$ Notice that, thanks to the Lebesgue Dominated Convergence Theorem, the integral in the brackets above goes to zero as $\eta\to0$. It follows that \[ \frac{\alpha}{
d(x)^{2s}}u_{\eta}(x)+\mathcal{L}_s(\Omega(x),\varphi_r(x))\ge (\alpha-g(\eta))d(x)^{\eta-2s}. \] At this point it is enough to choose $\bar \eta$ such that $ \alpha-g(\bar \eta)\ge {\alpha}/2$ in order to conclude the proof. \end{proof}
Let us now provide a comparison principle for equation \eqref{parabolic}. This relies on Lemma \ref{differenza},
which is in turn based on the regularization of sub/super-solutions through sup/inf convolution.
\begin{lemma}[Comparison] \label{timecomp} Assume \eqref{Sigmadef}, \eqref{contpar}-\eqref{alphap}, that $T\in (0,\infty)$, that $u(t,x)$ and $v(t,x)$ are sub- and supersolutions to \eqref{parabolic}, respectively, that they are ordered on the boundary, namely $u\le v$ on $ (0,T)\times\partial\Omega $, and that $u(0,\cdot)\le v (0,\cdot)$ on $\Omega$. Then, \begin{equation} u\le v \ \ \mbox{in} \ \ (0,T)\times\Omega . \end{equation} \end{lemma} \begin{proof} Given $\delta>0$, let us introduce the function $u_{\delta}(t,x)=u(t,x)-\frac{\delta}{T-t}$ and notice that it is a viscosity subsolution to \eqref{parabolic}, namely, \[ \partial_t u_{\delta}(t,x) +h(t,x)u_{\delta} (t,x) +\mathcal{L}_{s}(\Omega(t,x), u_{\delta} (t,x) )- f(t,x)\le -\frac{\delta}{T^2}<0. \] We firstly show that $u_{\delta}\le v$, for any $\delta>0$, and then conclude the proof by taking the limit as $\delta$ goes to $0$. Using Lemma \ref{differenza}, we deduce that $w=u_{\delta}-v$ solves in the viscosity sense \[ \partial_t w(t,x) +h(t,x)w(t,x)+\mathcal{L}_{s}(\Omega(t,x), w (t,x) )\le0 \ \ \ \mbox{in} \ \Omega. \] Let us assume by contradiction that $\sup_{(0,T)\times\Omega}w= M >0$. Due to the ordering assumption on the parabolic boundary $ (0,T)\times\partial\Omega $ and on the initial conditions, and the behavior of $u_{\delta}$ as $t\to T^-$, $M$ is attained inside at $(\bar t, \bar x)\in(0,T)\times\Omega$. This implies that the constant function $M$ touches from above $w$ at
the point $(\bar t, \bar x)$, and then it is an admissible test function for $w$ to be a subsolution. It follows that \[ \frac{\alpha}{d^{2s}(\bar x)}M\le h(\bar t, \bar x)M\le 0, \] that is clearly a contradiction. Then $u_{\delta}-v=w\le 0$ for any $\delta>0$ and the assertion follows. \color{black} \end{proof}
\begin{cor}[Elliptic comparison] \label{ellcomp} Assume \eqref{alpha}-\eqref{Sigmadef}, that $u(x)$ and $v(x)$ are sub- and supersolutions to \eqref{10:17}, respectively, and that $u\le v$ on $\partial\Omega$. Then, \begin{equation} u\le v\ \ \mbox{in} \ \ \Omega . \end{equation} \end{cor}
The proof of this corollary can be easily deduced from that of Corollary \ref{ellipticdif} and we omit the details for the sake of brevity.
We are now ready to present a first existence result, which relies on the possibility of finding suitable barriers for the parabolic problem. We will later check that such barriers can be easily obtained from Lemma \ref{barriercone}.
\begin{teo}[Existence, given barriers] \label{perronpar} Assume \eqref{Sigmadef}, \eqref{contpar}-\eqref{alphap}. Let $T \in (0,\infty) $ and let $\underline l(t,x)$ and $\overline l(t,x)$ be sub- and supersolutions to \eqref{parabolic}, respectively, with $\underline l=\overline l=0$ on $(0,T) \times \partial \Omega$. Then, for any $u_0\in C(\Omega)$ such that $\underline l(0,x)\le u_0(x) \le \overline l(0,x)$ for all $x\in\Omega$, problem \eqref{parabolic} admits a unique viscosity solution. \end{teo}
\begin{proof} We aim at applying Perron's method. Let us set \begin{align*}
A&=\Big\{w\in \text{\rm USC}(\Omega\times(0,T)) \ : \ \underline l(t,
x)\le w(t,x)\le \overline l(t, x) \ \mbox{for $(t,
x) \in (0,T) \times \partial \Omega$, } \\ &\hspace{25mm} w \ \mbox{is a subsolution to \eqref{parabolic}, and } w(x,0)\le u_0(x) \Big\}. \end{align*} Since $\underline l \in A\not = \emptyset$, we can set \[ u(t,x)=\sup_{w\in A} w(t,x). \] By definition, it follows \color{black} that for any $(\bar t, \bar x)\in(0,T)\times\Omega$ there exists a sequence $\{v_n\}\subset A$ that $\Gamma-$converges to the upper semicontinuous envelope $u^*$ at $(\bar t, \bar x)$. We \color{black} can use Lemma \ref{stability} to show that $u^*$ is a subsolution to the equation in \eqref{parabolic}. Moreover $u^*(\cdot,0)\le u_0(\cdot)$ in $\Omega$ and $u^*(t,\cdot)\le 0$ on $\partial\Omega$ for any $t\in(0,T)$. In fact, \color{black} assume by contradiction that there exists $\bar x \in \Omega$ such that $u^*(\bar x,0)- u_0(\bar x)=\xi>0$. This would mean that there exist $\{x_n\}\subset\Omega$, $\{t_n\}\subset [0,T)$ and $\{w_n\}\subset A$ such that \[ x_n\to \bar x, \ \ \ t_n\to 0 \ \ \ \mbox{and} \ \ \ w_n(x_n,t_n)\to u_0(\bar x)+\xi, \] that is in contradiction with the definition of $u$. Similarly we check that $u^*(t,\cdot)\le 0$ on $\partial\Omega$. This implies that $u^*\in A$ and, by definition of $u$, we get that $u=u^*$.
Now we claim that the lower-semicontinuous envelope \color{black} $u_*$ is a supersolution to \eqref{parabolic} and that $u_*(x,0)\ge u_0(x)$ for all $x\in\Omega$. Once the claim is proved, we can apply the comparison principle of Lemma \ref{timecomp} to the subsolution $u$ and the \color{black} supersolution $u_*$ to infer that \[
u\le u_* . \] This implies that $u_*=u^*=u$ is a viscosity solution to \eqref{parabolic} that satisfies the boundary and initial conditions \color{black} in the classical sense.\\
Let us hence prove that $u_*$ is a supersolution with $u_*(x,0)\geq u_0(x)$. \color{black}
By contradiction, we assume there exists $\phi\in C^2( (0,T)\times\Omega )$ such that $u_*-\phi$ has a strict global minimum at $(t_0,x_0)$, $u_*(t_0,x_0)=\phi(t_0,x_0)$ and \begin{equation}\label{16:03}
\partial_t \phi(t_0,x_0)+ h(t_0,x_0)u_*(t_0,x_0)+\int_{\Omega(t_0,x_0)} \frac{\phi(t_0,x_0)-\phi(t_0,z)}{|x_0-z|^{N+2s}}dz<f(t_0,x_0). \end{equation} This means that there exists $\epsilon>0$ such that the function \[ F(t,x)=\partial_t \phi(t,x)+h(t,x)\phi(t,x)+\mathcal{L}_{s} (\Omega(t,x),\phi (t,x) \color{black})-f(t,x) \] satisfies $F(t_0,x_0)=-\epsilon$. Since such a function is continuous at $(t_0,x_0)$, \color{black} there exists $r>0$ such that $F(t,x)<-\frac{\epsilon}{2}$ for all $(t,x)\in
\overline{B_r(t_0,x_0)}$ where $B_{r}(t_0,x_0)=\{(|t- t_0|^2+ |x-
x_0|^2)^{\frac12}<r\}\subset (0,T)\times\Omega$. In what follows, to ease notation, we write $v$ for the subsolution $u=u^*$ constructed above. Let us define \color{black} \[ \delta_1=\inf_{((0,T)\times\Omega) \setminus B_{r}(t_0,x_0)} (v- \phi)(t,x)>0, \ \ \ \ \ \ \delta_2=\frac{\epsilon}{4 \sup_{(t,x)\in {B_r(t_0,x_0)}}h(t,x)}, \] and set $\delta=\min\{\delta_1,\delta_2\}$. With this choice of $r$ and $\delta$ we define \[ V = \begin{cases} \max\{v ,{\phi}+\delta\} \qquad & \mbox{in } B_{r}(t_0,x_0),\\
v & \mbox{otherwise}\, . \end{cases} \] Notice \color{black} that, since $v$ is upper semicontinuous, \color{black} the set $\{v -{\phi}-\delta<0\}$ is open and nonempty
(since, by definition of the lower semicontinuous envelope, there exists a sequence $(t_n,x_n)\to (t_0,x_0)$ such that $v(t_n,x_n)\to u_*(t_0,x_0)=\phi(t_0,x_0)$ as $n\to\infty$). Moreover $\{v -{\phi}-\delta<0\}\subset B_{r}(t_0,x_0) $ thanks to the choice of $\delta_1$. We want to prove that $V $ is a subsolution. Let us consider now $\psi\in C^2( (0,T)\times\Omega )$ such that $V -\psi$ has a global maximum at $(\tau_0,y_0)$ \color{black} and $\psi(\tau_0,y_0)=V (\tau_0,y_0)$. If $V (\tau_0,y_0)=v(\tau_0,y_0)$, since $V \ge v$, it results that $v -\psi$ has \color{black} a global maximum at $(\tau_0,y_0)$ and $v(\tau_0,y_0)=\psi(\tau_0,y_0)$. Using that $v $ is a subsolution we get that \begin{align*} &\partial_t \psi(\tau_0,y_0)+
h(\tau_0,y_0)V(\tau_0,y_0)+\mathcal{L}_s(\Omega(\tau_0,y_0),\psi
(\tau_0,y_0) \color{black})\\ &\quad =\partial_t \psi(\tau_0,y_0)+h(\tau_0,y_0)
v(\tau_0,y_0)+\mathcal{L}_s(\Omega(\tau_0,y_0),\psi(\tau_0,y_0)
\color{black}) \le f(\tau_0,y_0). \end{align*}
Let us now focus on the case $V (\tau_0,y_0)=\phi(\tau_0,y_0)+\delta\neq v(\tau_0,y_0)$. This implies that \[ \partial_t \psi(\tau_0,y_0)=\partial_t \phi(\tau_0,y_0). \]
and that $(\tau_0,y_0)\in B_{r}(t_0,x_0)$. Then we have that \[ \phi+\delta-\psi\le V -\psi\le 0 \ \ \ \mbox{in} \ \ B_{r}(t_0,x_0), \] where we have used the fact that $\phi+\delta\le V $ in $B_{r}(t_0,x_0)$. Moreover, we readily check that \color{black} \[ \phi+\delta-\psi\le \phi+\delta-v \le 0 \ \ \ \mbox{in} \ \ (0,T)\times\Omega \setminus B_{r}(t_0,x_0), \] since $v \equiv V \le \psi$ in $ (0,T)\times\Omega \setminus B_{r}(t_0,x_0)$ and thanks to the definition of $\delta_1$. As a consequence of \color{black} the two inequalities above, we deduce that $\phi+\delta\le \psi$ in $ (0,T)\times\Omega $. It follows that \color{black}
&\partial_t \psi(\tau_0,y_0)+
h(\tau_0,y_0)V(\tau_0,y_0)+\mathcal{L}_s(\Omega(\tau_0,y_0),\psi
(\tau_0,y_0) \color{black})
\\ &\quad \le\partial_t \phi(\tau_0,y_0)
+h(\tau_0,y_0)(\phi(\tau_0, \color{black} y_0)+\delta)+\mathcal{L}_s(\Omega(\tau_0,y_0),\phi (\tau_0,y_0) \color{black})
\\ &\quad \le f(\tau_0,y_0)-\frac{\epsilon}2+\frac{\epsilon}4<f(\tau_0,y_0), \end{align*} where the last inequality comes from the choice of $r$ and $\delta$. This leads \color{black} to a contradiction since it implies \color{black} that $V\in A$ and that $V>v\ge u$ somewhere in $ B_{r}(t_0,x_0)$. This proves that \color{black} $u_*$ is a supersolution to \eqref{parabolic}.
Finally let us \color{black} prove that $u_*(x,0)\ge u_0(x)$ for all $x\in\Omega$. Again assume by contradiction that there exists $\bar x\in\Omega$ such that \begin{equation}\label{spring} u_*(\bar x,0)< u_0(\bar x). \end{equation} Our aim is to build \color{black} a barrier from \color{black} below for $u(t,x)$ in a neighborhood of $(0,\bar x)$ (hence a barrier for $u_*$, as well), contradicting \color{black} \eqref{spring}. Thanks to the continuity of $u_0$, for any $\epsilon>0$ there exists $\delta_{\epsilon}< \frac12 d(\bar x)$ such that \[
|u_0(\bar x)-u_0(x)|\le\epsilon \ \ \ \mbox{if} \ \ \ |\bar x -x|\le \delta_{\epsilon}. \] Take now a function $\eta(x)\in C_c^{\infty}(B_1(0))$ with $0\le \eta\le 1$ and $\eta(0)=1$, and define \[ \tilde w(t,x) =a\eta\left(\frac{\bar x -x}{\delta_{\epsilon}}\right)-b-K\delta_{\epsilon}^{-2s}t, \]
with $a=u_0(\bar x)-\epsilon+\|u_0\|_{\elle{\infty}}$,
$b=\|u_0\|_{\elle{\infty}}$, and $K>0$ to be chosen below. \color{black} Thanks to the choice of $a$ and \color{black} $ b$ it is easy to check that $\tilde w(0,x)\le u_0(x)$. Moreover, recalling that supp$\left(\eta\left(\frac{\bar x -x}{\delta_{\epsilon}}\right)\right)\subset B_{\delta_{\epsilon}}(\bar x)$ and that the integral operator scales as $\delta^{-2s}$, we get that \[ \partial_t \tilde w+h \tilde w+\mathcal{L}_s(\Omega(t,x),\tilde w
(t,x) \color{black} )-f\le -K\delta_{\epsilon}^{-2s}+\delta_{\epsilon}^{-2s}C(\eta)+\|f\|_{\elle{\infty}}\le 0, \]
where the last inequality follows by letting \color{black} $K>
C(\eta)+\delta_{\epsilon}^{2s}[aC(d(\bar
x))+\|f\|_{\elle{\infty}}]$. Hence, \color{black} $\tilde w\in A$ and, by
definition of $u(t,x)$, $\tilde w(t,x)\le u(t,x)$.
Now, \color{black} for any $\epsilon>0$, there exists $\tilde{\delta}_{\epsilon}$ (possibly smaller than $\delta_{\epsilon}$) such that \[
u_0(\bar x)-2\epsilon\le \tilde w(t,x) \le u(t,x) \ \ \ \mbox{for} \ \ \ |\bar x -x|\le \tilde{\delta}_{\epsilon} \quad t\in [0,\tilde{\delta}_{\epsilon}). \] Then the same inequality holds for $u_*$, contradicting \color{black} \eqref{spring}. \end{proof}
\begin{proof}[Proof of Theorem \ref{existence}] Let us argue
for $T < \infty$ first. Choose $\eta\le
\min\{\bar{\eta},\eta_1\}$ where $\eta_1$ is from
\eqref{fconintro}-\eqref{u0con} and $\overline \eta $ from Lemma
\ref{barriercone} and Remark \ref{barpar},
and set $\overline l(t,x)= Q d(x)^{\eta}$ where $Q$ is a positive
constant to be chosen later. Whenever a smooth function
$\varphi$ touches $\overline l$ from below at $(t_0,x_0)$ we deduce that \begin{align*} &\partial_t\varphi_r(t_0,x_0)+h(t_0,x_0)\varphi_r(t_0,x_0)+\mathcal{L}_{s}
(\Omega(t_0,x_0),\varphi_r(t_0,x_0))- f(t_0,x_0)\\ &\quad \ge\frac{\alpha}{d(x_0)^{2s}}\varphi_r(t_0,x_0)+\mathcal{L}_{s}
(\Omega(t_0,x_0),\varphi_r(t_0,x_0))- |f(t_0,x_0)|\\ &\quad
\ge \left(Q \frac{\alpha}{2}-|f(t_0,x_0)|d(x_0)^{2s-\eta}\right)d(x_0)^{\eta-2s}\ge 0. \end{align*} The first inequality comes from the fact that $\partial_t\varphi_r(t_0,x_0)$ must be zero
and from assumption \eqref{alphap} whereas the second inequality follows by construction of $\overline l$ and by Lemma \ref{barriercone}. The third \color{black} inequality follows from
the assumption on $f$ (see \eqref{fconintro}) and by taking $Q$ large enough. This proves that $\overline{l}(t,x)$ is a supersolution of \eqref{parabolic}. Similarly, we can show that $\underline{l}(t,x)=-\overline{l}(t,x)$ is a subsolution. By possibly taking an even larger value of $Q$, we deduce that $\underline l(0,x)\le u_0(x) \le \overline l(0,x)$, thanks to assumption \eqref{u0con} on $u_0$ and to the choice of $\eta$. At this point, we can apply Theorem \ref{perronpar} and conclude the proof.
The limiting case $T=\infty$ can be tackled by passing to the limit in the sequence $\{u_n\}$ of solutions of problem \eqref{parabolic} in $(0,n)\times\Omega$. Thanks to Lemma \ref{timecomp} we have that \[ u_n(t,x)\equiv u_m(t,x) \ \ \ \mbox{in} \ \ (0,\min\{n,m\})\times\Omega. \] Then, for any $(t,x)\in(0,\infty)\times\Omega$, we can uniquely define $u(t,x)=u_{[t]+1}(t,x)$, where $[t]$ is the integer part of $t$. From the comparison principle applied on each domain $(0,n)\times\Omega$, this uniquely defines a solution for all times. \end{proof}
As mentioned above, we are not giving the details of the proof of Theorem \ref{teide}. Indeed, the elliptic case of Theorem \ref{teide}
follows again from the Perron method, by means of the barriers from Lemma \ref{barriercone}. Here, one is asked to use an elliptic version of the comparison principle of Lemma \ref{timecomp} (see Corollary \ref{ellcomp}), which can be deduced using Corollary \ref{ellipticdif}.
\section{The eigenvalue problem}\label{seceigenvalue} In this section, we focus on \color{black} the eigenvalue problem associated to the operator \color{black} \eqref{10-6}. Before discussing our specific \color{black} notion of eigenvalue, we prepare some technical tools.
\begin{lemma}[Strong maximum principle]\label{strongmax} Assume \color{black}\eqref{alpha}-\eqref{Sigmadef} and let \color{black}
$u\in \text{\rm LSC}_b(\Omega)$ \color{black} solve $$h(x) u(x)+\mathcal{L}_{s}(\Omega(x),u(x))\ge 0$$ in the viscosity sense in $\Omega$ and $u\ge0$ on $\partial\Omega$. Then, either $u\equiv0$ or $u>0$ in $\Omega$. \end{lemma} \begin{proof} Notice that, thanks to the comparison principle, we
have that $u\ge0$ in $\Omega$. Let us assume that $u(x_0)=0$ at some $x_0\in\Omega$ and that, by contradiction, $u(y_0)>0$, for some $y_0\in\Omega$.
If $y_0\in \Omega(x_0)$, since $x_0$ is a minimum for $u$, there exists $\varphi\in C^2(\Omega)$ such that $\varphi(x_0)=u(x_0)=0$ and $\varphi\le u$ in $\Omega$. Moreover, since $u\in \text{\rm LSC}_b(\Omega)$\color{black}, we can choose $\varphi$ nonnegative and nontrivial in $\Omega(x_0)$. Since $\varphi$ is an admissible test function for $u$ at the point $x_0$, it follows that \[
\int_{\tilde{\Omega}(x_0)}\frac{-\varphi(x_0+z)}{|z|^{N+2s}}dz\ge0. \] This however contradicts the fact that $\varphi\ge0$ is nontrivial in $\Omega(x_0)$, and proves that $u(x_0)=0$ implies $u\equiv0$ in $\Omega(x_0)$.
If $y_0\notin \Omega(x_0)$, thanks to assumption \eqref{Sigmadef} and the fact that $\Sigma$ is open, there exists a finite set of points $\{x_i\}_{i=0}^K\subset\Omega$ such that $x_i\in \Omega(x_{i-1})$ for $i=1,\cdots, K$ and $y_0\in\Omega(x_K)$. Using inductively the previous part we deduce that $u=0$ in each $\Omega(x_i)$, that is $u(y_0)=0$, which is again a contradiction. \end{proof}
The next technical Lemma allows us to restrict the operator to a subdomain of $\Omega$. This requires modifying both the sets $\Omega(x)$ and the function $h$. Thanks to the assumptions, in particular the density bound for $\Sigma$ in \eqref{Sigmadef}, it turns out that the restricted operator satisfies the same properties as the original one\color{black}. \begin{lemma}[Localization]\label{localization} Let $f\in C(\Omega)$ and assume that $v$ solves in the viscosity sense \[ h(x)v(x)\color{black}+\mathcal{L}_s(\Omega(x),v(x)\color{black})\le f(x)\color{black} \ \ \ \mbox{in} \ \ \Omega. \] If the open set $O\subset\Omega$ is such that $v\le 0$ in $\Omega\setminus O$, then $v$ also solves in the viscosity sense \[ j(x)v(x)\color{black}+\mathcal{L}_s(\Xi(x),v(x)\color{black})\le f(x)\color{black} \ \ \ \mbox{in} \ \ O, \] where $\Xi(x)=\Omega(x)\cap O$ and $j(x)=h(x)+\int_{\Omega(x)\setminus
O}\frac{dy}{|x-y|^{N+2s}}$. By additionally assuming
that $O$ coincides with some ball $\tilde B\subset \Omega$ and by setting $\tilde d(x)=\mbox{dist}(x,\partial \tilde B)$, it holds true that \begin{equation}\label{restrictedh} c_1\tilde d(x)^{-2s}\le j(x)\le c_2\tilde d(x)^{-2s} \ \ \ x\in \tilde B. \end{equation} \end{lemma} \begin{proof} Let us assume that $\max_{O}( v-\varphi)=(v-\varphi)(\bar x)=0$ and that $B_r(\bar x)\subset\subset O$. It \color{black} is possible to extend $\varphi$ to all $\Omega$ so that $\max_{\Omega} ( v-\varphi)=(v-\varphi)(\bar x)=0$ (with a slight abuse of notation, we still indicate \color{black} the extension by \color{black} $\varphi$). Then, we have \begin{align*} &f(\bar x)\ge h(\bar x)v(\bar x)+\int_{B_r(\bar x)\cap \Omega(\bar
x)}\frac{\varphi(\bar x)-\varphi(y)}{|\bar
x-y|^{N+2s}}dy+\int_{\Omega(\bar x)\setminus B_r(\bar
x)}\frac{v(\bar x)-v(y)}{|\bar x-y|^{N+2s}}dy\\ &\quad =\left[h(\bar x)+\int_{\Omega(\bar x)\setminus
O}\frac{dy}{|\bar x-y|^{N+2s}}\right]v(\bar x)+\int_{B_r(\bar x)\cap
\Sigma(\bar x)}\frac{\varphi(\bar x)-\varphi(y)}{|\bar
x-y|^{N+2s}}dy+\int_{\Xi(\bar x)\setminus B_r(\bar
x)}\frac{v(\bar x)-v(y)}{|\bar x-y|^{N+2s}}dy\\ &\quad
-\int_{\Omega(\bar x)\setminus O}\frac{v(y)}{|\bar x-y|^{N+2s}}dy\ge \left[h(\bar x)+\int_{\Omega(\bar x)\setminus O}\frac{dy}{|\bar x-y|^{N+2s}}\right]v(\bar x) +\int_{\Xi(\bar x)}\frac{\varphi_r(\bar x)-\varphi_r(y)}{|\bar x-y|^{N+2s}}dy, \end{align*} where the last \color{black} inequality comes from the fact that \color{black} $v\le 0$ in $\Omega\setminus O$.
Let us consider now the case $O\equiv \tilde B$. The estimate from above in \eqref{restrictedh} can be deduced as in \eqref{easypart}. We omit the \color{black} details. To show the estimate from below, fix $k>1$ so that \[
c- \frac 2{\omega_n}|B_1(0)\cap A(k)|\ge \frac12 c, \] where $c$ is the constant in \eqref{Sigmadef} and $A(k)=\{y\in\mathbb{R}^N \ : \ -k^{-1}\le y_1\le 0\}$. We also point out that the symmetry of $\Sigma$ implies, for $B^{\pm}_r(0)=B_r(0)\cap \{\pm z_1\ge 0\}$ and $r>0$, that \begin{equation}\label{auxilia}
|\Sigma\cap B^{\pm}_{r}(0)|\ge \frac c2|B_{r}(0)|.
\end{equation} Moreover, without loss of generality, we assume that $\tilde B=\{|y|\le 1\}$ (this is always true up to a translation and
dilation) and take $x\in \{|y|\le 1\}$ such that $k\tilde d(x)< \zeta d(x)$. Let us take now a system of coordinates with origin in the center of $\tilde B$ such that $|x|=-x_1=1-\tilde d(x)$. It follows that $\tilde{\Omega}(x)\cap B_{k\tilde d(x)}(0)=\Sigma \cap B_{k\tilde d(x)}(0)$ and \begin{align}\label{cip}
\int_{{\Omega}(x)\setminus \tilde B}\frac{dy}{|x-y|^{N+2s}}\ge \int_{{\Omega}(x)\cap B_{k\tilde d(x)}(x) \setminus \{y_1>-1\}}\frac{dy}{|x-y|^{N+2s}}=\int_{{\Sigma}\cap B^-_{k\tilde d(x)}(0) \setminus \{z_1>-\tilde d(x)\}}\frac{dz}{|z|^{N+2s}} \\
\ge(k\tilde d(x))^{-N-2s}|{\Sigma}\cap B^-_{k\tilde d(x)}(0) \setminus \{z_1>-\tilde d(x)\}|\nonumber. \end{align} We get that \begin{align}\label{ciop}
|{\Sigma}\cap B^-_{k\tilde d(x)}(0) \setminus \{z_1>-\tilde d(x)\}|\ge |{\Sigma}\cap B^-_{k\tilde d(x)}(0)|-|\{-\tilde d(x)\le z_1<0\}\cap B^-_{k\tilde d(x)}(0)| \\
\ge \frac c2 |B_{k\tilde d(x)}|- (k\tilde d(x))^N| B_1(0)\cap A(k) |\ge \frac {\omega_n}{4}c(k\tilde d(x))^N,\nonumber \end{align} where we have used \eqref{auxilia} and the definition of $k$ in the last two inequalities, respectively. Putting together \eqref{cip} and \eqref{ciop} and recalling the condition $k\tilde d(x)< \zeta d(x)$, we deduce that \[
\int_{{\Omega}(x)\setminus \tilde B}\frac{dy}{|x-y|^{N+2s}}\ge c_1 \tilde d(x)^{-2s} \ \ \ \mbox{for all $x\in \tilde B$ with $k\tilde d(x)\le\zeta\,$dist$(\tilde B, \partial\Omega)$}. \] This, together with the definition of $j(x)$, completes the proof of the Lemma. \color{black} \end{proof}
\begin{lemma}[Refined Maximum Principle]\label{aap} Assume \eqref{alpha} and \eqref{Sigmadef}. Let $\lambda>0$, $0 \leq \color{black} f\in C(\Omega)$, and assume that $u\in \text{\rm LSC}_b(\Omega)$\color{black}, with $u>0$ in $\Omega$ and $u=0$ on $\partial\Omega$, satisfies \[ h(x)u(x) +\mathcal{L}_s(\Omega(x),u(x) ) \ge \lambda u(x) +f(x) . \] Moreover, let $v\in USC_b(\Omega)$\color{black}, with $v\le 0$ on $\partial \Omega$, satisfy
\[
h(x)v(x) \color{black}+\mathcal{L}_s(\Omega(x),v(x) \color{black})\le \lambda v(x) \color{black}. \] If $f$ is non trivial then $v\le0$. If $f\equiv0$ and there exists $x_0\in\Omega$ such that $v(x_0)>0$ then $v=tu$ for some $t>0$. \end{lemma} \begin{proof} Let $z_t=v-tu$ for $t>0$. Then, thanks to Corollary \ref{ellipticdif} we have that $z_t$ satisfies \begin{equation}\label{25.6} h(x)z_t(x)+\mathcal{L}_s(\Omega(x),z_t(x)\color{black})\le \lambda z_t(x). \end{equation} Notice that for all $\rho>0$ and any $t>0$ such that $$t> \frac{\sup_{d(x)>\frac{\rho}{2}}v(x)}{\inf_{d(x)>\frac{\rho}{2}}u(x)}$$ (recall that $u>0$ in $\Omega$) we have that \[ \{x\in\Omega \ : \ d(x)\ge\rho \}\subset \{x\in\Omega \ : \ z_t<0 \}. \] We now use \color{black} Lemma \ref{localization} to restrict \eqref{25.6} to \color{black} $\Omega_\rho=\{x\in\Omega \ : \ d(x)<\rho \}$ and get \begin{equation}\label{25.6bis} j(x)z_t+\mathcal{L}_s(\Xi(x),z_t)\le \lambda z_t \ \ \ \mbox{in} \ \ \ \Omega_\rho, \end{equation} where $j(x)=h(x)+\int_{\tilde{\Omega}(x)\setminus\Omega_\rho
}\frac{dz}{|z|^{N+2s}}$ and $\Xi(x)=\Omega(x)\cap \Omega_\rho$. Taking $\rho$ such that $\rho< \left(\frac{\alpha}{\lambda}\right)^{\frac1{2s}}$ and using the coercivity assumption \eqref{alpha} on $h(x)$, it follows that $j(x)-\lambda>0$. Then, since $z_t\le 0$ on $\partial\Omega_{\rho}$, we can apply the comparison principle to \eqref{25.6bis} and \color{black} deduce that $z_t\le0$ in $\Omega_\rho$. This means that $z_t\le 0$ in $\Omega$.\\
Let us focus on the case $0\not = f \color{black} \ge0$ and assume, by contradiction, that there exists $x_0\in\Omega$ such that $v(x_0)>0$. Then, up to a multiplication with a positive constant, we have that $v(x_0)>u(x_0)$.\\ Let us set \[ \tau=\inf\{ t \ : \ z_t\le0 \ \mbox{in} \ \Omega\} \] and recall that $\tau>1$ since $v(x_0)>u(x_0)$. As $z_{\tau}\le 0$ we get \[ h(x)z_t(x)\color{black}+\mathcal{L}_s(\Omega(x),z_t(x)\color{black})\le \lambda z_t(x)\color{black}\le 0 \quad \forall t \geq \tau. \] We can apply the strong maximum principle of Lemma \ref{strongmax} to prove that either $z_{\tau}\equiv0$ or $z_{\tau}<0$. This latter case is not possible since it would contradict the definition of $\tau$. Having that $z_{\tau}\equiv 0$ we get $v_{\tau} := v=\tau u$. We have \[ h(x)v_{\tau}(x)+\mathcal{L}_s(\Omega(x),v_{\tau}(x))\le \lambda v_{\tau} (x) \] by assumption and, since $v_{\tau}= \tau u$, \[ h(x)v_{\tau}(x)+\mathcal{L}_s(\Omega(x),v_{\tau}(x))\ge \lambda v_{\tau}(x)+ f (x). \] By combining these two inequality, using Corollary \ref{ellipticdif} and recalling that $f$ is nontrivial, we obtain a contradiction. Hence, $v\le0$.
Let us now consider the case $f\equiv0$ and $v(x_0)>0$ for some $x_0\in\Omega$. Upon multiplying by a positive constant, we can assume that $v(x_0)>u(x_0)$. Following exactly the same argument and notation of the previous step we obtain that either $z_{\tau}\equiv0$ or $z_{\tau}<0$. The latter option again \color{black} leads to a contradiction. Hence, $z_{\tau}\equiv0$, which corresponds to the assertion. \color{black} \end{proof}
\begin{teo}\label{approximation} Assume \eqref{alpha}-\eqref{Sigmadef}. Given $\lambda>0$ and a nonzero $0\le f\in C(\Omega)$ satisfying \eqref{fcon}, let us assume that there exists $0\le u\in \text{\rm LSC}_b(\Omega)$ \color{black} such that \[ \begin{cases} h(x)u(x)\color{black}+\mathcal{L}_{s} (\Omega(x),u(x)\color{black})\ge\lambda u(x)\color{black}+ f(x) \qquad & \mbox{in } \Omega,\\
u(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \]
Then, for any $\mu\le \lambda$ and $ |g|\le f$, there exists a solution to \begin{equation}\label{24-6bis} \begin{cases} h(x)v(x)\color{black}+\mathcal{L}_{s} (\Omega(x),v(x)\color{black})= \mu v(x)\color{black}+g(x) \qquad & \mbox{in } \Omega,\\
v(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \end{equation}
If moreover $g$ is nonnegative and nontrivial then $v>0$.\color{black} \end{teo} \begin{proof} Let us set $v_0=0$ and recursively \color{black} define the sequence $\{v_n\}$ of solutions to \[ \begin{cases} h(x)v_n(x)\color{black}+\mathcal{L}_{s} (\Omega(x),v_n(x))=\mu v_{n-1}+ g(x) \qquad & \mbox{in } \Omega,\\
v_n(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \]
Notice that the existence of each $v_n$ is ensured by Theorem \ref{teide}. We now prove that $ |v_n|\le u$ by induction on $n$.
Let $n= 1$. As $|g|\leq f$ the comparison principle from Corollary \ref{ellcomp} ensures that $|v_1|\leq u$.
Assume that $|v_{n-1}| \leq u$. Since $|\mu v_{n-1}+ g(x)|\le \lambda u(x)+ f(x)$, we can use the comparison principle (see Corollary \ref{ellcomp}) to deduce that $|v_{n}|\le u$. In case $g\ge0$ a similar argument shows that $0\le v_{n}\le v_{n+1}$.\\
This implies that $|\mu v_{n-1}+ g(x)|\le \lambda u+f \le C d(x)^{\eta_f-2s}$, where we have used assumption \eqref{fcon} for the last inequality. Using Lemma \ref{barriercone}, we can conclude that there exist a large $Q$ (independent of $n$) such that $\overline{l}(x)=Qd(x)^{\eta}$, with $\eta=\min\{\bar \eta, \eta_f\}$, solves \[ h(x)\overline{l}(x)+\mathcal{L}_{s} (\Omega(x),\overline{l}(x))\ge \mu v_{n-1}+ g(x) \qquad \mbox{in } \Omega. \] Thanks again to the comparison principle, we deduce that $v_n\le Qd(x)^{\eta}$. Similarly, it follows that $v_n\ge- Qd(x)^{\eta}$. Let us consider then the half-relaxed limits of the sequence $v_n$ \[ \overline{v}(x)=\sup\{\limsup_{n\to\infty} v_n(x_n) \ : \ x_n\to x\}, \ \ \ \ \underline{v}(x)=\inf\{\liminf_{n\to\infty} v_n(x_n) \ : \ x_n\to x\}. \] Notice that by construction both $\overline{v}$ and $\underline{v}$ vanish on $\partial\Omega$. Taking advantage of Lemma \ref{stability}, we deduce that $\overline{v}$ and $\underline{v}$ are respectively sub and super-solution to \eqref{24-6bis}. Moreover, Corollary \ref{ellipticdif} implies that $w=\overline{v}-\underline{v}$ solves \[ h(x)w(x)+\mathcal{L}_{s} (\Omega(x),w(x))\le \mu w(x) \ \ \mbox{ in }\ \ \Omega, \ \ \mbox{ and } \ \ w=0 \ \ \mbox{ on } \ \ \partial\Omega.\\ \] Since $u$ satisfies \[ h(x)u(x)+\mathcal{L}_{s} (\Omega(x),u(x))\ge\mu u(x)+ f(x) \qquad \mbox{in } \Omega,\\ \] we may use Lemma \ref{aap} to conclude that $w\le0$, namely $\overline{v}\le\underline{v}$. Due to the natural order between the two functions, we deduce that $v=\overline{v}=\underline{v}$ is a viscosity solution to \eqref{24-6bis}. If $g \geq 0$ and not trivial, by using the strong maximum principle we easily deduce that $v>0$. \color{black} \end{proof}
Let us assume that the nontrivial $0\le f\in C(\Omega)$ satisfies \eqref{fcon} and recall the definition of \color{black} the set \[ E_f=\{\lambda\in\mathbb R \ : \ \exists v\in C(\overline{\Omega}), \ v>0 \mbox{ in } \Omega, \ v=0 \mbox{ on } \partial\Omega, \ \mbox{such that} \ hv+\mathcal{L}_s(\Omega, \color{black} v)= \lambda v+f\}. \] Moreover, let \color{black} \begin{equation}\label{28.06} \lambda_f \color{black} =\sup \ E_f. \end{equation} As we shall see, $\lambda_f$ does not depend on the particular choice of $f$. By definition and thanks to Theorem \ref{approximation} we deduce that \[ \mbox{if} \ \ \ g\le f \ \ \ \mbox{then} \ \ \ \lambda_g\le \lambda_f. \] The following Lemma shows us that $\lambda_f$ is finite and that $E_f$ is a left semiline. \begin{lemma}[]\label{acotado} Assume \eqref{alpha}-\eqref{Sigmadef} and that $0\le f\in C(\Omega)$ is nonzero and satisfies \eqref{fconintro}. Then, $\lambda_f$ is positive and finite and $E_f$ is a left semiline with $E_f \not = \mathbb{R}$. \end{lemma} \begin{proof} Notice that for any $$\displaystyle \lambda\in \left(-\infty,\frac{\alpha}{\mbox{diam}(\Omega)^{2s}}\right)$$ the operator \[ u\mapsto \color{black} [h(x)-\lambda]u +\mathcal{L}(\Omega(x),u) \] fulfills assumptions \eqref{ostationary}-\eqref{alpha}. We can apply the existence results and the strong maximum principle of the previous chapter to deduce that $(-\infty,\frac{\alpha}{\mbox{diam}(\Omega)^{2s}})\subset E_f$.
Moreover, if $\lambda\in E_f$, Theorem \ref{approximation} ensures that any $\mu<\lambda$ belongs to $E_f$ as well. This proves that $E_f$ is
a left semiline.
To show that $E_{f}\neq \mathbb{R}$ let us take $\lambda<\lambda_f $. Since $E_f$ is a left semiline, there exists some $v\in C(\overline \Omega)$ with $v=0$ on $\partial\Omega$ and strictly positive in $\Omega$, such that \color{black} \[ h(x)v(x)+\mathcal{L}_s(\Omega(x), v(x))= \lambda v(x)+f(x) \ \ \ \mbox{in the viscosity sense in } \ \Omega. \]
Now we want to \emph{restrict} this equation to a ball $B\subset\subset\Omega$ such that $f>0$ in $B$. In order to do so, for any $x\in B$, we define $\Xi(x)=\Omega(x)\cap B$. Taking advantage of the positivity of $v$, we can apply Lemma \ref{localization} to $-v$ and deduce that \begin{equation}\label{cipcip} j(x)v+{\mathcal{L}}_s(\Xi(x),v) \ge \lambda v+ f(x) \ \ \ \mbox{in the viscosity sense in } \ B, \end{equation} where \[
j(x)=h(x)+\int_{{\Omega}(x)\setminus B}\frac{dy}{|x-y|^{N+2s}}. \]
Thanks to Lemma \ref{localization}, we have \color{black} that $j(x)$ satisfies \eqref{alphap} (by possibly changing the \color{black} constants) and that the family $\{\Xi(x)\}$ satisfies the same kind of assumptions of $\{{\Omega}(x)\}$. Then, for any positive continuous function $g$ \color{black} with compact support in $B$ there exists a unique viscosity solution to \[ \begin{cases} j(x)w(x)\color{black}+\mathcal{L}_s(\Xi(x),w(x)\color{black})= g(x) \qquad & \mbox{in } B,\\
w(x) = 0 & \mbox{on } \partial B. \end{cases} \] Thanks to the strong maximum principle of Lemma \ref{strongmax} and the fact that $g$ has compact support in $B$, it follows that $0<g\le C_0 w$ for some positive constant $C_0$.
If $C_0<\lambda$, we would get that \begin{equation}\label{ciopciop} j(x)w(x)\color{black}+\mathcal{L}_s(\Xi(x),w(x)) \le \lambda w(x)\color{black} \ \ \ \mbox{in the viscosity sense in } \ B. \end{equation} Applying Lemma \ref{aap} to \eqref{cipcip} and \eqref{ciopciop}, it would follow $w\le0$, which is a contradiction.
This proves that $\lambda\le C_0$. Since the constant $C_0$ does not depend on $\lambda$, we finally deduce \[ \lambda_f =\sup \ E_f\le C_0. \] This concludes the proof of the Lemma.
\end{proof}
Let us provide now the proof of the well-posedness of the first-eigenvalue problem. \begin{proof}[Proof of Theorem \ref{mussaka}] We split the
argument into subsequent steps.
\textbf{Step 1:} Let us show at first that for a given $f\in C(\Omega)$ that satisfies \eqref{fconintro} and the additional condition \begin{equation}\label{temp} f(x)\ge \theta>0 \ \ \ \mbox{in} \ \ \ \Omega, \end{equation} problem \eqref{eigenproblem} admits a solution $v_f>0$ with $\lambda_f:=\sup E_f$. Let $\{\lambda_n\}$ be a sequence that converges to ${\lambda_f}$ and consider the associate sequence $\{v_n\}\subset C(\overline{\Omega})$, $v_n>0$ of solutions to \begin{equation}\label{aug} \begin{cases} h(x)v_n(x)+\mathcal{L}_{s} (\Omega(x),v_n(x))=\lambda_n v_n(x)+ f(x) \qquad & \mbox{in } \Omega,\\
v_n(x) = 0 & \mbox{on } \partial \Omega. \end{cases}
\end{equation} We first claim that $\|v_n\|_{\elle{\infty}}\to\infty$. Indeed, let us assume by contradiction that there exists $k>0$ such that $ |v_n| \le k$. Thanks to Lemma \ref{barriercone}, we deduce that there exists $Q=Q(k)$ such that the function $\overline{l}(x)=Qd(x)^{\eta}$, with $\eta=\min\{\bar \eta, \eta_f\}$, solves in the viscosity sense \[ h(x)\overline{l}(x)+\mathcal{L}_{s} (\Omega(x),\overline{l}(x))\ge \lambda_n v_n(x)+ f(x) \quad \mbox{in } \Omega\,\quad \forall \ n>0 . \] Thanks to Corollary \ref{ellcomp} and the sign of $v_n$, we have that \begin{equation}\label{8mar} 0<v_n(x)\le Q d^{\eta}(x), \end{equation} where the right-hand side does not depend on $n$. Using Lemma \ref{stability}, we deduce that $\underline{v}(x)=\inf\{\liminf_{n\to\infty} v_n(x_n) \ : \ x_n\to x\}$ is a supersolution to \color{black}
\begin{equation}\label{29-11} \begin{cases} h(x)v(x)+\mathcal{L}_{s} (\Omega(x),v(x))= \lambda_f \color{black} v(x)+ f(x) \qquad & \mbox{in } \Omega,\\
v(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \end{equation}
Since $v_n> 0$ also $\underline{v}\ge0$ in $\Omega$ and Lemma \ref{strongmax} assures that $\underline{v}>0$ in $\Omega$. Then, Theorem \ref{approximation} provides a solution $v_{\infty}>0$ to \eqref{29-11}. \color{black} Taking $\epsilon>0$ such that $f\ge \frac12f+\epsilon v_{\infty}$, which is possible thanks to \eqref{temp}, it follows that $\tilde{v}_{\infty}=2v_{\infty}$ satisfies \[ h(x)\tilde{v}_{\infty}(x)+\mathcal{L}_{s} (\Omega(x),\tilde{v}_{\infty}(x))\ge(\lambda_f+\epsilon) \tilde{v}_{\infty}(x)+ f(x) \qquad \mbox{in } \Omega. \]
Taking again advantage of Theorem \ref{approximation} we reach a contradiction with respect to the definition of $\lambda_f$. \color{black}
We have then proved that
$\|v_n\|_{\elle{\infty}}\to\infty$. Setting
$u_n=v_n\|v_n\|_{\elle{\infty}}^{-1}$ we obtain that \[ \begin{cases} h(x)u_n(x)+\mathcal{L}_{s} (\Omega(x),u_n(x))=\lambda_n u_n(x)+
\frac{ f(x) }{\|v_n\|_{\elle{\infty}}} \qquad & \mbox{in } \Omega,\\
u_n(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \]
Since $0<u_n\le 1$, we deduce as in \eqref{8mar} that $u_n\le Qd^{\eta}(x) $ and then \[ \overline{u}(x)=\sup\{\limsup_{n\to\infty} u_n(x_n) \ : \ x_n\to x\}, \ \ \ \ \underline{u}(x)=\inf\{\liminf_{n\to\infty} u_n(x_n) \ : \ x_n\to x\}, \] are well defined and vanish on the boundary. Lemma \ref{stability} assures us that they are sub and super solution to \begin{equation}\label{paris} \begin{cases} h(x)u(x)+\mathcal{L}_{s} (\Omega(x),u(x))= \lambda_f \color{black} u(x)\qquad & \mbox{in } \Omega,\\
u(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \end{equation}
By definition of $u_n$, there exists a sequence of points $\{x_n\}\subset\Omega$ such that $u_n(x_n)=1$. Thanks to the uniform bound $u_n\le Qd^{\eta}(x) $, we deduce that there exists $\Omega'\subset\subset \Omega$ such that $\{x_n\}\subset\Omega'$. Using Lemma \ref{regelliptic}, it follows that \[
\|u_n\|_{C^{\gamma}(\Omega')}\le \tilde C(C,s,\zeta,d(\Omega'', \Omega'), Q\|d\|_{\elle{\infty}}^{\eta}), \] with $\Omega'\subset\subset \Omega''\subset\subset \Omega$. Then, up to a not relabeled subsequence, $u_n$ uniformly converges to a continuous function $u\in C(\overline{\Omega'})$. Furthermore, $u\equiv \overline{u}\equiv \underline{u}$ in $\Omega'$ and $u_n(x_n)\to u(\bar x)=1$ for some $\bar x\in \Omega'$. This entails on the one hand that
$\underline{u}$ is a non negative and nontrivial supersolution to \eqref{paris}, so that Lemma \ref{strongmax} implies $\underline{u}>0$. On the other hand, we obtain that there exists $\bar x\in \Omega'$ such that $\overline{u}(\bar x)>0$.
By applying again Lemma \ref{aap} directly to $\overline u$ and $\underline u$, we conclude that there exists $t>0$ such that $\overline u= t \underline u$. This implies that both functions are continuous. Moreover, since $t>0$, we deduce that both $\underline u$ and $\overline u$ are at the same time sub- and supersolutions. Hence, $\underline u$ and $\overline u$ are eigenfunctions related to $\lambda_f$.
Now we want to get rid of assumption \eqref{temp}. Notice that we used it to show that $\|v_n\|_{\elle{\infty}}$ must diverge. Then, we have to prove that $\|v_n\|_{\elle{\infty}}\to\infty$, assuming that the nontrivial positive continuous function $f$ solely satisfies \eqref{fcon}. Again, let us argue by contradiction and suppose that $\|v_n\|_{\elle{\infty}}\le k$. This would lead again to the existence of a non trivial $v_{\infty}$ solution to \eqref{29-11}. Setting $g=\sup\{f,\theta\}$, the previous argument provides us with $\lambda_g>0$ and $v_g>0$ solution to \eqref{eigenproblem}. Since $f\le g$, by construction we deduce that $\lambda_f\le \lambda_g$. Assume that $\lambda_f< \lambda_g$ and take $\mu\in (\lambda_f,\lambda_g)$. Then, thanks to the definition of $\lambda_g$, the fact that $f\le g$ and by using Theorem \ref{approximation}, it results that the following problem \[ \begin{cases} h(x)z(x)+\mathcal{L}_{s} (\Omega(x),z(x))=\mu z(x) + f(x) \qquad & \mbox{in } \Omega,\\
z(x) = 0 & \mbox{on } \partial \Omega, \end{cases} \] admits a positive solution. This however contradicts the fact that $\lambda_f$ is a supremum. On the other hand, if $\lambda_f= \lambda_g$, Lemma \ref{aap}, applied to $v_{\infty}$ and $v_g$, would imply $v_g\le0$, which is again a contradiction.
\textbf{Step 2:} Let us assume that there exists another pair $(\mu, w)\in (0,\infty)\times C(\overline \Omega)$ that solves \eqref{eigenproblem} with $w>0$. If $\mu< \lambda_f$, we deduce that there exists $\lambda\in (\mu,\lambda_f)$ and, by definition of $\lambda_f$ and Lemma \ref{acotado}, a function $u\in C(\overline{\Omega})$, with $u>0$ in $\Omega$, solving \[ \begin{cases} {h}(x)u(x)+{\mathcal{L}}_s (\Omega(x),u(x))=\lambda u(x)+ f(x) \qquad & \mbox{in } \Omega,\\
u(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \] On the other hand, since $\mu<\lambda$ and $w>0$ we have that \[ \begin{cases} {h}(x)w(x)+{\mathcal{L}}_s (\Omega(x),w(x))\le\lambda w(x) \qquad & \mbox{in } \Omega,\\
w(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \] Being in the same setting of Lemma \ref{aap}, we deduce that $w\le0$,
which is a contradiction.
This proves that $ \mu\ge\lambda_f$. Assume by contradiction, that $\mu> \lambda_f$. Take $\epsilon>0$ small enough in order to have $\lambda_f<\mu-\epsilon$ and a nonnegative nontrivial continuous function $g(x)$ such that $\epsilon w \ge g$ in $\Omega$. It follows that $w$ solves \[ {h}(x)w(x)+{\mathcal{L}}_s (\Omega(x),w(x))\ge(\mu-\epsilon) w(x) + g(x) \qquad \mbox{in } \Omega. \] Using Theorem \ref{approximation} we deduce that there exists $w_g$ solution to \[ \begin{cases} {h}(x)w_g(x)+{\mathcal{L}}_s (\Omega(x), w_g(x))=(\mu-\epsilon) w_g(x)+ g(x) \qquad & \mbox{in } \Omega,\\
w_g(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \] Applying the refined comparison principle of Lemma \ref{aap} between $v_f$ and $w_g$ it follows that $v_f\le 0$, which is a contradiction. We eventually conclude that $\lambda_f=\mu$. This also implies that $\lambda_f=\lambda_g=\overline{\lambda}$ for all $f,g$ that satisfy \eqref{fconintro}.
\textbf{Step 3.} In this last step we show that solutions of \eqref{eigenproblem} are unique up to a multiplicative constant. Assume that $w$ is a nontrivial solution to \eqref{eigenproblem} and let $v_f>0$ be the solution provided by Step 1. Since $w$ is nontrivial we can always assume, up to a multiplication with a (not necessarily positive) constant, that $w(x_0)>0$. Then, we can use the second part of Lemma \ref{aap} to conclude that $w=t v_f$ for some constant $t$. \end{proof}
\begin{proof}[Proof of Theorem \ref{helmholtz}] Since $\lambda< \overline{\lambda}$, thanks to the assumptions on $f$ and using characterization \eqref{28.06}, we deduce that there exists a function $v>0$ in $\Omega$ solving \[ \begin{cases}
{h}(x)v(x)+{\mathcal{L}}_s (\Omega(x),v(x))=\lambda v(x)+ |f(x)| \qquad & \mbox{in } \Omega,\\
v(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \] Clearly, $v$ is a supersolution for \eqref{helmeq} and we can take advantage of Theorem \ref{approximation} to conclude that \eqref{helmeq} admits a solution. To deal with uniqueness we assume that \eqref{helmeq} has two solutions $v$ and $z$ and set $w=v-z$. Using Corollary \ref{ellipticdif} it follows that $w$ solves \[ \begin{cases} {h}(x)w(x)+{\mathcal{L}}_s (\Omega(x),w(x))=\lambda w(x) \qquad & \mbox{in } \Omega,\\
w(x) = 0 & \mbox{on } \partial \Omega. \end{cases} \] Applying Lemma \ref{aap} to $v$ and $w$, we conclude that $w\le 0$. The same conclusion holds for $ -w= z-v$, so that $w=0$ and $v\equiv z$. \end{proof}
\section{Asymptotic analysis}\label{Asymptotic}
In this last section, we address the large time behavior of the solution to \eqref{parabolic}.
\begin{teo}\label{totoinfty} Let us assume \eqref{alpha}-\eqref{Sigmadef}, that $ g_0 \in C(\Omega)$ with supp$( g_0
)\subset\subset\Omega$, and that there exist constants $\eta_g,C_g>0$ such that the continuous nonnegative function $g:(0,\infty)\times\Omega\to\mathbb R^{+}$ satisfies \begin{equation}\label{gcon} g(t,x)d(x)^{2s-\eta_g}e^{\lambda t}\le C_g \ \ \ \mbox{for some} \ \ \ \lambda<\overline{\lambda}, \end{equation} where $\overline{\lambda}$ is the first eigenvalue provided by Theorem \emph{\ref{mussaka}}. Then, if $w\in C([0,\infty)\times\overline{\Omega})\cap L^{\infty}((0,\infty)\times\Omega)$ satisfies in the viscosity sense \[ -g(x,t)\le \partial_tw(t,x)+h(x)w(t,x)+\mathcal{L}_{s}(\Omega(x),w(t,x)) \le g(x,t) \ \ \ \mbox{in} \ (0,\infty)\times\Omega, \] coupled with boundary and initial conditions \[ \begin{cases}
w(t,x) = 0 & \mbox{on } (0,\infty)\times\partial\Omega,\\
w(0,x) = g_0(x) & \mbox{on } \Omega, \end{cases} \]
one has $|w(x,t)|\le Q_{\lambda}d(x)^{\tilde
\eta}e^{-\lambda t}$, for all $\tilde \eta\le \min\{\bar \eta, \eta_g\}$ and some $Q_{\lambda} >0$ with $Q_\lambda \to \infty$ as $\lambda\to \overline \lambda$. \end{teo} \begin{proof} Let us consider $\varphi_{\lambda}$ solving \begin{equation}\label{barrierup} \begin{cases} h(x)\varphi_{\lambda}(x)+\mathcal{L}_{s} (\Omega(x),\varphi_{\lambda}(x))= \lambda \varphi_{\lambda}(x) +C_g d(x)^{\eta_g-2s} \qquad & \mbox{in } \Omega,\\
\varphi_{\lambda}(x) = 0 & \mbox{on } \partial\Omega,\\
\varphi_{\lambda}(x) > 0 & \mbox{in } \Omega.\\ \end{cases} \end{equation} Such a function exists since $\lambda<\overline{\lambda}$ and thanks to the characterization of Theorem \ref{mussaka}. We want to prove that $\overline w(t,x)=e^{-\lambda t}\varphi_{\lambda}(x)$ solves in the viscosity sense \[ \partial_t\overline w(t,x)+h(x)\overline w(t,x)+\mathcal{L}_{s}(\Omega(x),\overline w(t,x)) \ge g(x,t) \qquad \mbox{in } (0,\infty)\times\Omega. \]
In order to achieve this, let $\phi\in C^2(\Omega\times(0,\infty))$ and $(t,x)\in(0,\infty)\times\Omega$ such that $\overline w(t,x)=\phi(t,x)$ and that $\overline w(\tau,y)\ge\phi(\tau,y)$ for $(\tau,y)\in(0,\infty)\times\Omega$. We need to check (see Lemma \ref{def2}) that for any $B_r(x)\subset \Omega$ \[
\partial_t \phi_r( t , x )+ h( x ) \phi_r( t , x )+\mathcal{L}_{s}(\Omega(x), \phi_r(t,x))\ge g(x,t). \] By construction of $\phi_r$ we have that $\partial_t \phi_r( t , x )\ge -\lambda e^{-\lambda t}\varphi_{\lambda}(x)$. Moreover, the function $\psi^t(y)=\phi(t,y) e^{\lambda t}$ touches $\varphi_{\lambda}$ at $x$ from below. Then, we infer that \begin{align*} & h( x ) \phi_r( t , x )+\mathcal{L}_{s}(\Omega(x),
\phi_r(t,x))=e^{-\lambda t}\left[ h( x )\psi^t_r( x
)+\mathcal{L}_{s}(\Omega(x), \psi^t_r( x )) \right]\\ &\quad \ge e^{-\lambda t} [ \lambda \varphi_{\lambda}(x) +C_g d(x)^{\eta_g-2s}], \end{align*} where the last inequality follows from the definition of $\varphi_{\lambda}$. By collecting the information obtained we get that \begin{align*} \quad& \partial_t \phi_r( t , x )+ h( x ) \phi_r( t , x
)+\mathcal{L}_{s}(\Omega(x), \phi_r(t,x))- g(x,t)\\ &\quad \ge -\lambda e^{-\lambda t}\varphi_{\lambda}(x) + e^{-\lambda t} [
\lambda \varphi_{\lambda} +C_g d(x)^{\eta_g-2s}] - g(x,t)\\ &\quad=e^{-\lambda t}d(x)^{\eta_g-2s} [C_g -g(x,t)e^{\lambda t}d(x)^{2s-\eta_g}]\ge0, \end{align*} where the last inequality comes from assumption \eqref{gcon}. Similarly, we can prove that $\underline w(t,x)=-e^{-\lambda t}\varphi_{\lambda}(x)$ solves \[ \partial_t\underline w(t,x)+h(x)\underline w(t,x)+\mathcal{L}_{s}(\Omega(x),\underline w(t,x)) \le -g(x,t) \qquad \mbox{in } (0,\infty)\times\Omega. \]
Since $g_0$ has a compact support we can assume
$| g_0 |\le\varphi_{\lambda}$, for otherwise we can consider $k\varphi_{\lambda}$ instead of $\varphi_{\lambda}$ for large $k>0$. Therefore, we can use the comparison principle to deduce that \[
|w(t,x)|\le e^{-\lambda t}\varphi_{\lambda}(x). \] At this point, notice that the right-hand side of the first equation in \eqref{barrierup} can be estimated as follows \[
\lambda \varphi_{\lambda}(x) +C_g d(x)^{\eta_g-2s}\le \lambda \|\varphi_{\lambda}\|_{\elle{\infty}}+C_g d(x)^{\eta_g-2s}\le C_{\lambda,g }d(x)^{\eta_g-2s}. \] This implies that there exists $Q=Q(C_{\lambda,g })$ large enough so that $\overline l= Qd(x)^{\tilde \eta}$ is a super solution to \eqref{barrierup} (see Lemma \ref{barriercone}). Then, we can use the comparison principle to conclude that $\varphi_{\lambda}(x)\le \overline l$, which concludes the proof. \end{proof}
We conclude by presenting a proof of Theorem \ref{lalaguna}. \begin{proof}[Proof of Theorem \ref{lalaguna}] Using Lemma \ref{differenza}, it follows that $w(t,x)=u(t,x)-v(x)$ solves in the viscosity sense \begin{equation}\label{4-6}
\partial_tw(t,x)+h(t,x)w(t,x)+\mathcal{L}_{s} (\Omega(x),w(t,x))\le \tilde{f}(x,t) \qquad \mbox{in } (0,\infty)\times\Omega \end{equation} where \[
\tilde{f}(x,t)=|f(x,t)-f({x} )|+M|h({t} ,{x} )-h({x} )|+2M\int_{\mathbb{R}^N}\frac{|\chi_{\tilde{\Omega}({t} ,{x} )}-\chi_{\tilde{\Omega}({x} )}|}{|z|^{N+2s}}dz. \] Similarly, we can apply again Lemma \ref{differenza} to $-w(t,x)=v(x)-u(t,x)$ to deduce that \begin{equation}\label{4-6bis}
\partial_tw+h(t,x)w+\mathcal{L}_{s} (\Omega(x),w)\ge -\tilde{f}(x,t) \qquad \mbox{in } (0,\infty)\times\Omega. \end{equation} Notice now that \[
\int_{\mathbb{R}^N}\frac{|\chi_{\tilde{\Omega}({t} ,{x} )}-\chi_{\tilde{\Omega}({x} )}|}{|z|^{N+2s}}dz=\int_{|z|\ge\frac{\zeta}{2}d(\bar x)}\frac{|\chi_{\tilde{\Omega}({t} ,{x} )}-\chi_{\tilde{\Omega}({x} )}|}{|z|^{N+2s}}dz \] \[
\le \frac{C}{d(x)^{N+2s}}|\Omega(t,x)\Delta\Omega(x)|\le \frac{Ce^{-\lambda t}}{d(x)^{2s-\eta_1}}, \] where we have used assumptions \eqref{ostationary} and \eqref{omegax} to deduce the equation in the first line, and assumption \eqref{decayomega} to deduce the last inequality in the second line. Thanks to inequalities \eqref{4-6} and \eqref{4-6bis}, the assertion follows by a direct application of Theorem \ref{totoinfty}. \end{proof}
\end{document}
|
arXiv
|
\begin{document}
\title{Quantum simulation of Abelian Wu-Yang monopoles in spin-1/2 systems} \date{\today}
\author{Ze-Lin Zhang} \affiliation{Department of Physics, Fuzhou University, Fuzhou 350002, P. R. China} \affiliation{Fujian Key Laboratory of Quantum Information and Quantum Optics, Fuzhou University, Fuzhou 350002, P. R. China} \author{Ming-Feng Chen} \affiliation{Department of Physics, Fuzhou University, Fuzhou 350002, P. R. China} \affiliation{Fujian Key Laboratory of Quantum Information and Quantum Optics, Fuzhou University, Fuzhou 350002, P. R. China} \author{Huai-Zhi Wu} \affiliation{Department of Physics, Fuzhou University, Fuzhou 350002, P. R. China} \affiliation{Fujian Key Laboratory of Quantum Information and Quantum Optics, Fuzhou University, Fuzhou 350002, P. R. China} \author{Zhen-Biao Yang} \email{E-mail address: [email protected]. The author to whom any correspondence should be addressed.} \affiliation{Department of Physics, Fuzhou University, Fuzhou 350002, P. R. China} \affiliation{Fujian Key Laboratory of Quantum Information and Quantum Optics, Fuzhou University, Fuzhou 350002, P. R. China}
\begin{abstract}
With the help of the Berry curvature and the first Chern number $($$\textit{C}_1$$)$, we both analytically and numerically investigate and thus simulate artificial magnetic monopoles formed in parameter space of the Hamiltonian of a driven superconducting qubit. The topological structure of a spin-1/2 system $($qubit$)$ can be captured by the distribution of Berry curvature, which describes the geometry of eigenstates of the Hamiltonian. Degenerate points in parameter space act as sources $($$\textit{C}_1$ = $1$, represented by quantum ground state manifold$)$ or sinks $($$\textit{C}_1$ = $-1$, represented by quantum excited state manifold$)$ of the magnetic field. We note that the strength of the magnetic field $($described by Berry curvature$)$ has an apparent impact on the quantum states during the process of topological transition. It exhibits an unusual property that the transition of the quantum states is asymmetric when the degenerate point passes from outside to inside and again outside the manifold spanned by system parameters. Our results also pave the way to explore intriguing properties of Abelian Wu-Yang monopoles in other spin-1/2 systems. \end{abstract}
\pacs{}
\maketitle \section{Introduction} \label{sec: Introduction}
In nature, magnetic poles always come in twos, a north and a south. Yet their electrostatic cousins, positive and negative charges, exist independently. In 1931, Dirac developed a theory of monopoles consistent with both quantum mechanics and the gauge invariance of the electromagnetic field~\cite{PAMD-1931}. The existence of a single Dirac monopole would not only address this seeming imbalance which appears in the Maxwell's equations, but would also explain the quantization of electric charge~\cite{PAMD-1931,PAMD-1948}. Up to now, magnetic monopole analogues have been created in many different ways, such as superfluid $^3$He~\cite{SB-1976,MS-1987}, exotic spin ice~\cite{CMS-2008,BGCAPF-2009,LRPCB-2010} and spinor Bose-Einstein condensates ~\cite{Machida-1998,H-1998,PM-2009,RPM-2011,RRKMH-2014,RRTMH-2015,TRMHM-2016,MP-2013}. Methods in Refs.~\cite{RRKMH-2014,RRTMH-2015,TRMHM-2016} could be regarded as excellent examples of quantum simulation of magnetic monopoles. Quantum simulation was originally conceived by Feynman in 1982~\cite{RPF-1982}, which permits the study of quantum systems that are difficult to study in laboratory. For this reason, simulators are especially aimed at providing insight about the behavior of more inaccessible systems appearing in nature. By introducing the point-like topological defects accompanied with a vortex filament into the spin texture of a dilute Bose-Einstein condensate, researchers provided an ideal analogue to Dirac monopole~\cite{PM-2009}.
The topological properties of quantum systems play an extraordinary role in our understanding of natural phenomena. For example, the first Chern number $($$\textit{C}_1$$)$~\cite{SSC-1946}, a robust topological invariant that stays the same under small perturbations of the system, can be used to help categorize physical phenomena. It is closely related to the Berry phase, which arises in the cyclic adiabatic evolution of a system in addition to its dynamical counterpart~\cite{MVB-1984}. Point-like topological defects, such as degeneracy points in the Hamiltonian parameter space of a spin-1/2 system, can be viewed as the physical counterpart of a topological invariant described by the first Chern number~\cite{DCAJ-2004}. $\textit{C}_1$ can be extracted by integrating the Berry curvature over a closed surface. Gritsev \textit{et al.}~\cite{VGAP-2012} proposed an effective method to measure the Berry curvature directly via the nonadiabatic response of physical observables to the rate of change of an external parameter. The method provides a powerful and general approach to explore topological properties in arbitrary quantum systems whose Hamiltonian can be written in terms of a set of externally controlled parameters. Using this method, some researchers measured the topological transition $\textit{C}_1=1\rightarrow0$ in a single superconducting qubit~\cite{SKKSGVPPL-2014}, and others observed topological transitions in interacting quantum circuits~\cite{PR-2014}. Experimental schemes have also been proposed to simulate the dynamical quantum Hall effect in a Heisenberg spin chain with interacting superconducting qubits~\cite{YZXYZ-2015}, and to realize several-spin one-dimensional Heisenberg chains using nuclear magnetic resonance $($NMR$)$ simulators~\cite{LLLNLPD-2016}.
In this paper, we study the Wu-Yang monopoles \cite{YCN-1975}, which remove out the ``Dirac string'' by gauge transformation in parameter space of the Hamiltonian of a driven superconducting qubit for both geometry $($Berry curvature$)$ and topology $($the first Chern number, $\textit{C}_1$$)$. The topological structure of the qubit can be captured by the distribution of Berry curvature, which describes the geometry of eigenstates of the Hamiltonian. We note that degenerate points in parameter space of the Hamiltonian act as the sources $($sinks$)$ of $\textit{C}_1$ and are analogues to magnetic monopoles $\textit{g}_{\textit{N}(\textit{S})}$ $($$C_1$ = $1$ $\leftrightarrow$ $\textit{g}_{\textit{N}}$, $C_1$ = $-1$ $\leftrightarrow$ $\textit{g}_{\textit{S}}$$)$. We also note that the transition of quantum states is asymmetric during the process when the degeneracy passes from outside to inside and again outside the manifold spanned by system parameters, and the Berry curvature and the fidelity of quantum states have some interesting correlations during the process of topological transition. We give a preliminary explanation to it by introducing the notion of magnetic charges. This general method also can be simulated by other spin-1/2 systems. For example, it can be extended to that in an NMR system and is possible to experimentally investigate more intriguing properties of multi-monopoles, which could be used to construct new kinds of devices based on synthetic magnetic fields.
The remainder of this paper is organized as follows. In Sec.~\ref{sec: Geometry and topology in the specific state manifold}, we introduce the quantum geometric tensor and show its relation to the Berry curvature; in Sec.~\ref{sec: The Chern-Gauss-Bonnet theorem}, we describe how the first Chern number is obtained from the Berry curvature, and in Sec.~\ref{sec: Measuring the Berry curvature}, we outline an effective method to measure the Berry curvature directly via the nonadiabatic response of physical observables to the rate of change of an external parameter. In Sec.~\ref{sec: From Dirac monopole to Wu-Yang monopole}, we explain how Wu-Yang monopoles differ from Dirac monopoles through two kinds of quantum state manifolds. In Sec.~\ref{sec: Physical Model for Implementation}, we introduce a physical model for the simulation of the Abelian Wu-Yang monopoles by a driven superconducting qubit. Finally, in Sec.~\ref{sec: Results}, we discuss some interesting correlations between the Berry curvature and the quantum states during the process of topological transition, and we then describe the experimental feasibility of this theoretical method.
\section{Geometry and topology in the specific state manifold} \label{sec: Geometry and topology in the specific state manifold}
Consider a family of parameter-dependent Hamiltonians $\hat{H}(\vec{\lambda})$ for a quantum system and require $\hat{H}$ to depend smoothly on a set of parameters $\vec{\lambda} = (\lambda^{\mathbb{1}}, \lambda^{\mathbb{2}}, \cdots)\in\mathcal{M}$ $(\mathcal{M}$ denotes the base manifold of Hamiltonian parameters$)$ and to act over the Hilbert space. The outline fonts $\mathbb{1}$ and $\mathbb{2}$ indicate different indices. The distance between the two neighbouring specific $($say, ground$)$ state wave functions $|\psi_{0}(\vec{\lambda})\rangle$ and $|\psi_{0}(\vec{\lambda} + \textit{d}\vec{\lambda})\rangle$ over $\mathcal{M}$ is~\cite{JPGV-1980,MCFL-2010,MKVGAP-2013}
\begin{eqnarray}\label{distance}
\textit{ds}^2 = 1 - |\langle \psi_{0}(\vec{\lambda})|\psi_{0}(\vec{\lambda} + \textit{d}\vec{\lambda})\rangle|^2 = \underset{\mu\nu}{\sum}\textit{g}_{\mu\nu}\textit{d}\lambda^{\mu}\textit{d}\lambda^{\nu},~~~
\end{eqnarray}
where the quantum $($Fubini-Study$)$ metric tensor $\textit{g}_{\mu\nu}$ associated with the ground state manifold is the symmetric real part of the quantum geometric tensor $\textit{Q}_{\mu\nu}$:
\begin{equation}\label{quantum geometric tensor}
\textit{Q}_{\mu\nu} = \langle\partial_{\mu}\psi_{0}|\partial_{\nu}\psi_{0}\rangle-\langle\partial_{\mu}\psi_{0}|\psi_{0}\rangle\langle \psi_{0}|\partial_{\nu}\psi_{0}\rangle,
\end{equation}
\begin{equation}\label{Fubini-Study metric tensor}
\textit{g}_{\mu\nu} = \mathrm{Re}[\textit{Q}_{\mu\nu}] = (\textit{Q}_{\mu\nu}+\textit{Q}^\ast_{\mu\nu})/{2},
\end{equation}
with $\partial_{\mu(\nu)}\equiv{\partial}/{\partial\lambda^{\mu(\nu)}}$. The Hermitian metric tensor $\textit{Q}_{\mu\nu}$ remains unchanged under arbitrary $\lambda$-dependent $\textit{U}(1)$ local gauge transformation of $|\psi_{0}(\vec{\lambda})\rangle$. In another pioneering work~\cite{MVB-1984}, Berry introduced the concept of the geometric phase and the related geometric curvature $($also called Berry phase and Berry curvature$)$. The Abelian Berry curvature $\textit{F}_{\mu\nu}$ is given by the antisymmetric imaginary part of $\textit{Q}_{\mu\nu}$:
\begin{eqnarray}\label{Berry curvature}
\textit{F}_{\mu\nu} = -2\mathrm{Im}[\textit{Q}_{\mu\nu}] = \textit{i}(\textit{Q}_{\mu\nu}-\textit{Q}^\ast_{\mu\nu}) = \partial_{\mu}\textit{A}_{\nu}-\partial_{\nu}\textit{A}_{\mu},~~~~~
\end{eqnarray}
where $\textit{A}_{\mu(\nu)}=\textit{i}\langle \psi_{0}(\vec{\lambda})|\partial_{\mu(\nu)}|\psi_{0}(\vec{\lambda})\rangle$ is just the Berry connection.
\subsection{The Chern-Gauss-Bonnet theorem}
\label{sec: The Chern-Gauss-Bonnet theorem}
Let $\mathcal{M}^{m}$ be a compact oriented Riemannian manifold of even dimension $($$\textit{m} = 2\textit{n}$$)$ and define on $\mathcal{M}^{m}$ a global \textit{m}-form. The Chern-Gauss-Bonnet $($\textit{C-G-B}$)$ formula then says that
\begin{equation}\label{Chern-Gauss-Bonnet}
\int_{\mathcal{M}^{m}} \textit{e}(\Omega) = \chi(\mathcal{M}),
\end{equation}
where $\textit{e}(\Omega)$ is the Euler class, $\chi(\mathcal{M})\equiv 2(1-\mathfrak{g})$ is the integer Euler characteristic describing the topology of the smooth manifold $\mathcal{M}$ and $\mathfrak{g}$ is the genus that also can be considered as the number of holes of the manifold. As shown in Fig.~\ref{fig:figure1}, the two simplest closed manifolds are taken as examples. In the lower dimensional version, the $\textit{C-G-B}$ theorem reduces to the Gauss-Bonnet $($\textit{G-B}$)$ theorem. The Fubini-Study tensor $\textit{g}_{\mu\nu}$ defines a Riemannian manifold related to the ground state. In particular, the structure of the Riemannian manifold provides a different topological integer, obtained by applying the $\textit{G-B}$ theorem to the quantum metric tensor~\cite{PP-2006}:
\begin{eqnarray}\label{Gauss-Bonnet}
\frac{1}{2\pi}\bigg(\iint_{\mathcal{M}}\textit{K}~\textit{dS} + \oint_{\partial{\mathcal{M}}}\kappa_{\textit{g}}~\textit{dl}\bigg) = \chi(\mathcal{M}),
\end{eqnarray}
where $\textit{K}$ $($Gauss curvature$)$, $\textit{dS}$ $($area element$)$, $\kappa_{\textit{g}}$ $($geodesic curvature$)$, and $\textit{dl}$ $($line element$)$ are geometric invariants, meaning that they remain unchanged under any change of variables. The left side of Eq.~$($\ref{Gauss-Bonnet}$)$ are the bulk $($$\mathcal{M}$$)$ and boundary $($$\partial{\mathcal{M}}$$)$ contributions to $\chi(\mathcal{M})$ of the Riemannian manifold. If the manifold $\mathcal{M}$ is compact and without boundary $($closed$)$, then the boundary Euler integrals vanish, as we prove in detail in Appendix~{I} of the Supplementary data. In this paper we will focus only on the two-dimensional $($$\textit{m}$ = $2$ in Eq.~$($\ref{Chern-Gauss-Bonnet}$)$$)$ version and the dimensionality here is that of parameter space $($i.e., ${\mathcal{S}}^2$$)$ which is composed by the polar angle $\theta$ and the azimuthal angle $\phi$ of a magnetic field applied to a spin-1/2 system. Then we get the global $\textit{G-B}$ theorem on the sphere
\begin{eqnarray}\label{Global Gauss-Bonnet}
\frac{1}{2\pi}\oint_{{\mathcal{S}}^2}\textit{K}~\textit{dS} = \chi({\mathcal{S}}^2).
\end{eqnarray}
\begin{figure}
\caption{$($Color online$)$ Euler characteristic for a torus $($doughnut$)$ and a sphere. From the torus's point of view, the Gauss curvature is positive when the curving of the surface is elliptic $($the green area$)$. If the curving is parabolic, like a plane $($the red circle$)$, then the Gauss curvature is zero. If the surface starts to show hyperbolic curving, such as a saddle $($the blue area$)$, then the Gauss curvature becomes negative. From the sphere's point of view, the Gauss curvature is a positive constant. Intuitively, $\chi(T^2) = 2(1-\mathfrak{g}(T^2))=0$ and $\chi(S^2) = 2(1-\mathfrak{g}(S^2))=2$.}
\label{fig:figure1}
\end{figure}
To catch the significance of the first Chern number $\textit{C}_1$, we need to adiabatically change these parameters around a loop that bounds a sphere ${\mathcal{S}}^2$ to acquire a Berry phase, which can be written as
\begin{eqnarray}\label{quantum geometric tensor relation formula}
\varphi_{\textit{Berry}} = \iint_{{\mathcal{S}}^2}\textit{F}_{\mu\nu}\textit{dS}_{\mu\nu} = \iint_{{\mathcal{S}}^2}\vec{\textit{F}}\cdot\textit{d}\vec{\textit{S}},
\end{eqnarray}
where $\textit{dS}_{\mu\nu}$ is a directed surface element, $\vec{\textit{S}}$ is a vector normal to the sphere ${\mathcal{S}}^2$ and $\vec{\textit{F}}$ is a vector known as the Berry curvature analogous to the magnetic field in electromagnetism, which is given by the off-diagonal components of the electromagnetic tensor $\textit{F}_{\mu\nu}$, see in Eq.~$($\ref{Berry curvature}$)$. For example, the Berry curvatures $\textit{F}^{(N)}_{\theta\phi}$ and $\textit{F}^{(S)}_{\theta\phi}$ only have off-diagonal components.
While the Berry connection depends on the $\textit{U}(1)$ local gauge choice $|\psi_{i}\rangle\rightarrow \textit{e}^{i\varphi(\theta,\phi)}|\psi_{i}\rangle$, where $|\psi_{i}\rangle$ is a certain eigenstate in this paper $($subscript \textit{i} = 0, 1$)$, the Berry curvature is gauge invariant. Therefore, we obtain that the integral
\begin{eqnarray}\label{the first Chern number}
\textit{C}_1 = \frac{1}{2\pi}\oint_{{\mathcal{S}}^2}\textit{F}_{\mu\nu}\textit{dS}_{\mu\nu} = \frac{1}{2\pi}\oint_{{\mathcal{S}}^2}\vec{\textit{F}}\cdot\textit{d}\vec{\textit{S}},
\end{eqnarray}
is a kind of robust topological invariant known as the first Chern number, and it could be viewed as counting the number of times an eigenstate circles around a sphere in the Hilbert space~\cite{SKKSGVPPL-2014}.
\subsection{Measuring the Berry curvature}
\label{sec: Measuring the Berry curvature}
In analogy to electrodynamics, the local gauge-dependent Berry connection $\textit{A}_{\mu}$ can never be physically observed, while Berry curvature $\textit{F}_{\mu\nu}$ is gauge-invariant and may be related to a physical observable that manifests the local geometric property of the eigenstates in the parameter space. The first Chern number reveals the global topological property of such a Hamiltonian manifold. In fact, $\textit{C}_1$ exactly counts the number of degenerate points enclosed by parameter space $\mathcal{S}^2$, see in Appendix~{II} of the Supplementary data, where we endow it with physical meaning by using the conception of the magnetic monopole. We substitute $\textit{A}_{\mu}$ into $\textit{F}_{\mu\nu}(\vec{\textit{F}})$ and rewrite the Berry curvature as
\begin{eqnarray}\label{large Berry curvature}
\textit{F}_{\mu\nu} = i\sum_{n\neq0}\frac{\langle \psi_{0}|\partial_{\mu}\hat{H}|\psi_{n}\rangle\langle \psi_{n}|\partial_{\nu}\hat{H}|\psi_{0}\rangle -(\nu\leftrightarrow\mu)}{(\textit{E}_n-\textit{E}_0)^2},~~~~~~~~~
\end{eqnarray}
where $\textit{E}_n$ and $|\psi_{n}\rangle$ are the $\textit{n}$-th eigenvalue and its corresponding eigenstate of the Hamiltonian $\hat{H}$, respectively. Eq.~$($\ref{large Berry curvature}$)$ indicates that degeneracies are some singularities that will contribute nonzero terms to $\textit{C}_1$ in Eq.~$($\ref{the first Chern number}$)$.
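As an illustrative numerical sketch of Eq.~$($\ref{large Berry curvature}$)$ $($the field magnitude $B_0$, the finite-difference step and the function names below are arbitrary choices of this sketch$)$, consider a spin-1/2 in a field of fixed magnitude, $\hat{H} = \frac{B_0}{2}(\sin\theta\cos\phi\,\hat{\sigma}_x + \sin\theta\sin\phi\,\hat{\sigma}_y + \cos\theta\,\hat{\sigma}_z)$; evaluating the formula from the numerically obtained eigenstates should reproduce the ground-state curvature $\textit{F}_{\theta\phi} = \frac{1}{2}\sin\theta$, i.e., the field of a monopole of charge $+1/2$ $($cf. Sec.~\ref{sec: From Dirac monopole to Wu-Yang monopole}$)$:
\begin{verbatim}
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

B0 = 1.0  # field magnitude; an arbitrary choice for this sketch

def H(theta, phi):
    """Spin-1/2 Hamiltonian H = (B0/2) * b_hat . sigma."""
    return 0.5 * B0 * (np.sin(theta) * np.cos(phi) * sx
                       + np.sin(theta) * np.sin(phi) * sy
                       + np.cos(theta) * sz)

def dH(theta, phi, which, eps=1e-6):
    """Central finite-difference derivative of H w.r.t. theta or phi."""
    if which == 'theta':
        return (H(theta + eps, phi) - H(theta - eps, phi)) / (2 * eps)
    return (H(theta, phi + eps) - H(theta, phi - eps)) / (2 * eps)

def berry_curvature(theta, phi, band=0):
    """F_{theta,phi} of the chosen band from the two-level spectral formula."""
    E, V = np.linalg.eigh(H(theta, phi))   # ascending eigenvalues
    n, m = band, 1 - band
    z = (V[:, n].conj() @ dH(theta, phi, 'theta') @ V[:, m]) \
        * (V[:, m].conj() @ dH(theta, phi, 'phi') @ V[:, n])
    return -2.0 * np.imag(z) / (E[m] - E[n]) ** 2

theta = 0.7
print(berry_curvature(theta, 0.3, band=0), 0.5 * np.sin(theta))
# ground band: ~ +sin(theta)/2; the excited band gives the opposite sign
\end{verbatim}
The result is insensitive to the arbitrary phases returned by the eigensolver, since the product of matrix elements in Eq.~$($\ref{large Berry curvature}$)$ is gauge invariant.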
In order to extract the Chern number of closed manifolds in the parameter space of the two-level system Hamiltonian, we analytically describe a simple topological structure of a superconducting qubit driven by a microwave field. In Ref.~\cite{VGAP-2012} it is shown that the Berry curvature can be extracted from the linear response of the qubit to nonadiabatic manipulations of its Hamiltonian $\hat{H}$ $($$\mu = \theta$, $\nu = \phi$$)$, which leads to a general force $\textit{M}_\phi\equiv-\langle\psi_{0}(t)|\partial_{\phi}\hat{H}|\psi_{0}(t)\rangle$, given by~\cite{VGAP-2012,MVBJMR-1993,SKKSGVPPL-2014}
\begin{equation}\label{general_force}
\textit{M}_\phi = \textrm{const} + \upsilon_\theta\textit{F}_{\theta\phi}+\mathcal{O}(\upsilon^2),
\end{equation}
where $\upsilon_\theta$ is the rate of change of the parameter $\theta$ $($quench velocity$)$ and $\textit{F}_{\theta\phi}$ is a component of the Berry curvature tensor. To neglect the nonlinear term, the system parameters should be ramped slowly enough, i.e., quasi-adiabatically.
\section{From Dirac monopole to Wu-Yang monopole} \label{sec: From Dirac monopole to Wu-Yang monopole}
In order to discuss the Dirac monopole in more detail, we first consider a monopole sitting at the origin, whose magnetic field satisfies
\begin{eqnarray}\label{div_B}
\nabla\cdot \vec{\textit{B}}= 4\pi\textit{g}\delta(\vec{\textit{r}}).
\end{eqnarray}
It follows from $\nabla^2(1/\textit{r})=-4\pi\delta(\vec{\textit{r}})$ and $\nabla(1/\textit{r})=-\vec{\textit{r}}/{{\textit{r}}^3}$ that the solution of this equation is
\begin{eqnarray}\label{B}
\vec{\textit{B}}= \vec{\textit{F}}(r,\theta,\phi) = {\textit{g}\vec{\textit{r}}}/{{\textit{r}}^3},
\end{eqnarray}
where $\textit{g} = \mp 1/2$. The magnetic flux $\Phi$ is obtained by integrating over a sphere $\mathcal{S}^2$ of radius \textit{r} so that
\begin{eqnarray}\label{Phi}
\Phi = \oint_{\mathcal{S}^2}\vec{\textit{B}}\cdot\textit{d}\vec{\textit{S}} = 4\pi\textit{g}.
\end{eqnarray}
But if $\vec{\textit{B}} = \nabla\times\vec{\textit{A}}$ everywhere, this integral would have to vanish. Thus the magnetic vector potential $\vec{\textit{A}}$ cannot exist everywhere on $\mathcal{S}^2$, even though $\nabla\cdot \vec{\textit{B}}$ is only non-zero at the origin, and the best we can do is to find an $\vec{\textit{A}}$ defined everywhere except on a line joining the origin to infinity, such that $\vec{\textit{B}} = \nabla\times\vec{\textit{A}}$ away from that line. To see that this is possible, one may consider the field due to an infinitely long and thin solenoid placed along the negative \textit{z} axis, with its positive pole of strength \textit{g} at the origin~\cite{PGDIO-1978}. For example, let us introduce the singular vector potential
\begin{eqnarray}\label{singular vector potential}
\textit{A}_r = \textit{A}_{\theta} = 0,~~~~\textit{A}_{\phi} = \frac{\textit{g}(1-\cos\theta)}{\textit{r}\sin\theta},
\end{eqnarray}
and verify that
\begin{eqnarray}\label{curl_A}
\nabla\times \vec{\textit{A}} = {\textit{g}\vec{\textit{r}}}/{{\textit{r}}^3} + \vec{\textit{B}}_s,
\end{eqnarray}
where $\vec{\textit{B}}_s$ is the singular vector field along \textit{z}-axis, with the expression
\begin{eqnarray}\label{singular vector field}
\vec{\textit{B}}_s=
\begin{cases}
4\pi\textit{g}\,\delta(\textit{x})\delta(\textit{y})\,\hat{z},&\textit{z}<0,~\theta=\pi \\
0,&\textit{z}>0,\theta = 0 \\
\end{cases}.
\end{eqnarray}
The singularity along the \textit{z}-axis is called the Dirac string and reflects the poor choice of the coordinate system, as is shown in Fig.~\ref{fig:figure2a_2b}$($a$)$. The field of such a solenoid differs from $\vec{\textit{B}}$ only by the singular magnetic flux along it, and it is clearly source-free: its divergence vanishes everywhere, including at the origin. Thus it may be
represented by a vector potential, $\vec{\textit{A}}$ $($say$)$, everywhere, and we may write
\begin{eqnarray}\label{curl_B2}
\vec{\textit{B}} = \nabla\times \vec{\textit{A}} - \vec{\textit{B}}_s.
\end{eqnarray}
\begin{figure}
\caption{$($Color online$)$ From Dirac monopole to Wu-Yang monopole. (a) Dirac monopole. Maxwell's equations can accommodate magnetic monopoles: due to quantum mechanics, it is always possible to create a magnetic field emerging from a point by importing the field from far away to the point through an infinitely thin, physically undetectable magnetic flux tube, which is called the Dirac string. From the endpoint of the string, magnetic field lines emerge radially outwards in the same way as electric field lines emerge from an electric point charge, so that the endpoint acts as a magnetic monopole. (b) Wu-Yang monopole. The singularity of the Dirac string is eliminated by selecting different coordinate systems $($gauge patches$)$ on the two hemispheres.}
\label{Dirac_Wu_monopole}
\label{fig:figure2a_2b}
\end{figure}
Now, let us describe how these monopoles differ from the standard Dirac monopoles. Under the condition of quantum excited state manifold, and from Eq.~$($3$)$ and Eq.~$($4$)$ in Appendix~{I} of the Supplementary data, we obtain the magnetic field of the south monopole
\begin{eqnarray}\label{Berry curvature2}
\textit{F}_{\theta\phi}^{(S)}
= -2\mathrm{Im}[\textit{Q}_{\theta\phi}]
=\frac{1}{2}
\left( \begin{array}{cc}
0 & -\sin\theta\\
\sin\theta & 0\\
\end{array}
\right).
\end{eqnarray}
The corresponding Berry curvature $\vec{\textit{F}}^{(S)} = -1/2\sin\theta \textit{d}\theta\wedge\textit{d}\phi$ is a symplectic form on ${\mathcal{S}}^2$. If we transform it to the Coulomb-like magnetic field
\begin{eqnarray}\label{field strength}
\vec{\textit{F}}^{(S)}(r,\theta,\phi) = {\textit{g}_{\textit{S}}\vec{\textit{r}}}/{{\textit{r}}^3} = -{\vec{\textit{r}}}/{2{\textit{r}}^3},
\end{eqnarray}
it turns out to be the magnetic field originating from a monopole located at the origin with magnetic charge $\textit{g}_{\textit{S}} = -1/2$~\cite{DCAJ-2004}.
Similarly, if we take another eigenstate which corresponds to the quantum ground state manifold
\begin{eqnarray}\label{qubit2}
|\psi_{0}(\theta, \phi)\rangle = -\sin({\theta}/{2})|0\rangle + \textit{e}^{i\phi}\cos({\theta}/{2})|1\rangle,
\end{eqnarray}
where we set $\sin({\theta}/{2}) = -\frac{\Omega}{2}\big/{\sqrt{\frac{\Omega^2}{4} + (\textit{E}_0 - \frac{\Delta}{2})^2}}$, and $\cos({\theta}/{2}) = -(\textit{E}_0 - \frac{\Delta}{2})\big/{\sqrt{\frac{\Omega^2}{4} + (\textit{E}_0 - \frac{\Delta}{2})^2}}$, then we have the magnetic field of the north monopole
\begin{eqnarray}\label{Berry curvature3}
\textit{F}^{(N)}_{\theta\phi}&=&\frac{1}{2}
\left( \begin{array}{cc}
0 & \sin\theta\\
-\sin\theta & 0\\
\end{array}
\right).~
\end{eqnarray}
The corresponding Berry curvature is $\vec{\textit{F}}^{(N)} = 1/2\sin\theta \textit{d}\theta\wedge\textit{d}\phi$, and the magnetic field
\begin{eqnarray}\label{field strength2}
\vec{\textit{F}}^{(N)}(r,\theta,\phi) = \textit{g}_{\textit{N}}\vec{\textit{r}}/{{\textit{r}}^3} = {\vec{\textit{r}}}/{2{\textit{r}}^3},
\end{eqnarray}
with the magnetic charge $\textit{g}_{\textit{N}} = 1/2$.
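Integrating either of these fields over the parameter sphere and comparing with Eq.~$($\ref{the first Chern number}$)$ provides a quick consistency check:
\begin{eqnarray*}
\frac{1}{2\pi}\oint_{{\mathcal{S}}^2}\vec{\textit{F}}^{(N,S)}\cdot\textit{d}\vec{\textit{S}} = \pm\frac{1}{2\pi}\int_0^{\pi}\!\!\int_0^{2\pi}\frac{\sin\theta}{2}~\textit{d}\theta~\textit{d}\phi = \pm1,
\end{eqnarray*}
so the ground-state $($excited-state$)$ manifold indeed carries $\textit{C}_1 = +1$ $($$-1$$)$, i.e., the total flux of a monopole of charge $\pm1/2$.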
T. T. Wu and C. N. Yang~\cite{YCN-1975} noticed that it may employ more than one vector potential to describe monopoles. For example, we may avoid singularities if we adopt $\vec{\textit{A}}_{\textit{N}}$ in the northern hemisphere and $\vec{\textit{A}}_{\textit{S}}$ in the southern hemisphere of the sphere ${\mathcal{S}^2}$ surrounding the monopole,
as depicted in Fig.~\ref{fig:figure2a_2b}$($b$)$. It shows that the vector potential $\vec{\textit{A}}_{\textit{N}}$ in region of ${\mathcal{S}_{\textit{N}}^2}$ can be expressed as
\begin{eqnarray}\label{singular vector potential}
\begin{split}
(\textit{A}_r)_{\textit{N}} = (\textit{A}_{\theta})_{\textit{N}} = 0,~~~~~
(\textit{A}_{\phi})_{\textit{N}} = \frac{\textit{g}(1-\cos\theta)}{\textit{r}\sin\theta},
\end{split}
\end{eqnarray}
and the vector potential $\vec{\textit{A}}_{\textit{S}}$ in region of ${\mathcal{S}_{\textit{S}}^2}$ can be expressed as
\begin{eqnarray}\label{singular vector potential2}
\begin{split}
(\textit{A}_r)_{\textit{S}} = (\textit{A}_{\theta})_{\textit{S}} = 0,~~~~~
(\textit{A}_{\phi})_{\textit{S}} = -\frac{\textit{g}(1+\cos\theta)}{\textit{r}\sin\theta}.
\end{split}
\end{eqnarray}
Obviously, the two vector potentials yield the magnetic field $\vec{\textit{B}}$ = $\textit{g}\vec{\textit{r}}/{{\textit{r}}^3}$, which is non-singular everywhere on the sphere~\cite{MN-1998}.
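As a quick symbolic check $($a minimal sketch using SymPy; the symbol names are our own$)$, one can verify that the northern-patch potential reproduces the monopole field $\textit{g}\vec{\textit{r}}/\textit{r}^3$ away from the origin, using the standard expressions for the curl in spherical coordinates; the computation for $\vec{\textit{A}}_{\textit{S}}$ is identical up to the sign of $\cos\theta$:
\begin{verbatim}
import sympy as sp

r, theta, g = sp.symbols('r theta g', positive=True)

# Northern-patch potential: only the phi-component is nonzero
A_phi = g * (1 - sp.cos(theta)) / (r * sp.sin(theta))

# Curl in spherical coordinates for A = A_phi(r, theta) * phi_hat:
#   (curl A)_r     =  (1/(r sin(theta))) * d/dtheta ( A_phi sin(theta) )
#   (curl A)_theta = -(1/r) * d/dr ( r A_phi )
curl_r = sp.simplify(sp.diff(A_phi * sp.sin(theta), theta) / (r * sp.sin(theta)))
curl_theta = sp.simplify(-sp.diff(r * A_phi, r) / r)

print(curl_r)       # g/r**2, i.e. the radial monopole field g r_vec / r**3
print(curl_theta)   # 0
\end{verbatim}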
It should be emphasized that the magnetic monopoles we simulate here are the Abelian Wu-Yang monopoles, for which the singularity of the Dirac string is eliminated by selecting different coordinate systems. The two coordinate systems are characterized by the choice of two different Berry curvatures; see Appendix~{II} of the Supplementary data for more details.
\section{Physical Model for Implementation} \label{sec: Physical Model for Implementation}
\begin{figure}
\caption{$($Color online$)$ Energy spectrum of the superconducting transmon qubit. Here we assume the qubit is effectively a nonlinear resonator, with a transition frequency of $\omega_q = 4.395$ GHz and the anharmonicity of 280 MHz, to ensure that the qubit transition only occurs between the ground state and the first excited state~\cite{KYGHSMBDGS-2007}.}
\label{fig:figure3}
\end{figure}
As we have mentioned above, the degenerate points emerging from the Berry curvature $\textit{F}_{\mu\nu}$ act as the sources $($the north magnetic charge $\textit{g}_{\textit{N}}$$)$ and sinks $($the south magnetic charge $\textit{g}_{\textit{S}}$$)$ of $\textit{C}_1 (\pm 1)$ and are analogous to Wu-Yang monopoles in parameter space. We reconsider the proposal that uses a superconducting transmon qubit, described in Ref.~\cite{SKKSGVPPL-2014}. As seen in Fig.~\ref{fig:figure3}, an anharmonicity of 280 MHz makes the qubit an effective two-level system in the relevant parameter range. In the rotating frame of a microwave drive with frequency $\omega_m$, the Hamiltonian for the qubit can be written as $($$\hbar\equiv$1$)$~\cite{FGCBV-2002,KYGHSMBDGS-2007}
\begin{eqnarray}\label{Hamiltonian}
\hat{H}
&=& 1/{2}[\Delta\hat{\sigma}_z+\Omega\hat{\sigma}_x\cos\phi+\Omega\hat{\sigma}_y\sin\phi],
\end{eqnarray}
where $\Delta = \omega_m-\omega_q$, $\hat{\sigma}_i$ $(i =x,y,z)$ are the Pauli spin matrices, and $\phi$ and $\Omega$ are, respectively, the phase and the amplitude $($Rabi frequency$)$ of the drive tone.
\begin{figure}\label{fig:figure4a_4b}
\end{figure}
By changing these parameters $($$\Delta$ and $\Omega$$)$, we can create arbitrary single-qubit Hamiltonians that can be represented in terms of a set of parameters as an ellipsoidal manifold. The eigenstates of this Hamiltonian are
\begin{eqnarray}\label{eigen_states1}
|\psi_0\rangle
&=& \frac{\Omega/2~|0\rangle}{\sqrt{\frac{\Omega^2}{4} + (\textit{E}_0 - \frac{\Delta}{2})^2}} - e^{i\phi}\frac{(\textit{E}_0 - \Delta/2)|1\rangle}{\sqrt{\frac{\Omega^2}{4} + (\textit{E}_0 - \frac{\Delta}{2})^2}},~~~~~~
\end{eqnarray}
\begin{eqnarray}\label{eigen_states2}
|\psi_1\rangle
&=& \frac{\Omega/2~|0\rangle}{\sqrt{\frac{\Omega^2}{4} + (\textit{E}_1 - \frac{\Delta}{2})^2}} + e^{i\phi}\frac{(\textit{E}_1 - \Delta/2)|1\rangle}{\sqrt{\frac{\Omega^2}{4} + (\textit{E}_1 - \frac{\Delta}{2})^2}},~~~~~~
\end{eqnarray}
where $|0\rangle = |\textit{e}\rangle = (1, 0)^{\textit{T}}$ is the excited state and $|1\rangle = |\textit{g}\rangle = (0, 1)^{\textit{T}}$ is the ground state. The corresponding eigenvalues of the eigenstates $|\psi_{1(0)}\rangle$ are ${\textit{E}}_{1(0)} = \pm\frac{1}{2}\sqrt{\Omega^2+\Delta^2}$. We notice that for $\textit{E}_1 = \textit{E}_0$, Eq.~$($\ref{large Berry curvature}$)$ clearly shows that degeneracies are some singular points that will contribute nonzero terms to $\textit{C}_1$ in Eq.~$($\ref{the first Chern number}$)$. In particular, with the choice
\begin{eqnarray}\label{parameters}
\Delta = \Delta_1\cos\theta + \Delta_2, ~~~~~~~~~~\Omega = \Omega_n\sin\theta,
\end{eqnarray}
the Hamiltonian can be presented in parameter space as an ellipsoidal manifold with cylindrical symmetry about the $z$-axis~\cite{SKKSGVPPL-2014}. Here, we set ellipsoids of size $\Delta_1 = 2\pi\times30$ MHz, and $\Omega_n = 2\textit{n}\pi\times10$ MHz $($\textit{n} = 1, 2, 3$)$, and vary $\Delta_2/(2\pi)$ between $-60$ and $60$ MHz. The topological properties are independent of deformations of the manifold that includes the degenerate point and the choice of these particular parameters does not really matter.
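To make the expected transition concrete, the following minimal numerical sketch $($our illustration; the grid size and the sampled ratios $\Delta_2/\Delta_1$ are arbitrary choices, while the parameter values follow the text$)$ evaluates the ground-band Berry curvature of this parametrized Hamiltonian through Eq.~$($\ref{large Berry curvature}$)$ and integrates it as in Eq.~$($\ref{the first Chern number}$)$; the result should drop from $\textit{C}_1\approx 1$ for $|\Delta_2/\Delta_1|<1$ to $\textit{C}_1\approx 0$ for $|\Delta_2/\Delta_1|>1$:
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

D1 = 2 * np.pi * 30.0    # Delta_1
Om = 2 * np.pi * 10.0    # Omega_n with n = 1

def chern(D2, n_theta=400):
    """Ground-band C_1 for the ellipsoidal manifold at detuning offset D2."""
    thetas = np.linspace(0.0, np.pi, n_theta)
    F = np.empty(n_theta)
    for i, th in enumerate(thetas):
        D, O = D1 * np.cos(th) + D2, Om * np.sin(th)
        # Hamiltonian and its analytic theta/phi derivatives, evaluated at phi = 0
        Hm = 0.5 * (D * sz + O * sx)
        dHth = 0.5 * (-D1 * np.sin(th) * sz + Om * np.cos(th) * sx)
        dHph = 0.5 * O * sy
        E, V = np.linalg.eigh(Hm)
        z = (V[:, 0].conj() @ dHth @ V[:, 1]) * (V[:, 1].conj() @ dHph @ V[:, 0])
        F[i] = -2.0 * np.imag(z) / (E[1] - E[0]) ** 2
    # F is phi-independent, so C_1 = (1/2pi) * 2pi * int F dtheta (trapezoid rule)
    dth = thetas[1] - thetas[0]
    return float(np.sum(0.5 * (F[1:] + F[:-1])) * dth)

for ratio in (0.0, 0.5, 0.9, 1.1, 2.0):
    print(ratio, round(chern(ratio * D1), 3))
# expected output: C_1 ~ 1 for |ratio| < 1 and C_1 ~ 0 for |ratio| > 1
\end{verbatim}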
\begin{figure}
\caption{$($Color online$)$ The Berry curvature measured as a function of $\Delta_2/\Delta_1$. In the region of $|\Delta_2/\Delta_1| < 1$, (a) shows the Berry curvature is positive $($the green part$)$, accompanied with the ground state evolution, while (b) shows the curvature is negative $($the blue part$)$, accompanied with the excited state evolution, and it disappears at the dashed line with $\Delta_2/\Delta_1 = 1$ and $\Omega_1 = 2\pi\times10$ MHz.}
\label{fig:figure5a_5b}
\end{figure}
Fig.~\ref{fig:figure4a_4b}$($a$)$ depicts an implementable pulse sequence used to measure the Berry curvature. We respectively initialize the qubit in its bare ground state $|\textit{g}\rangle$ and bare excited state $|\textit{e}\rangle$ at $\theta(\textit{t} = 0) = 0 $ $($this method works for arbitrary eigenstates of the initial Hamiltonian, so the particular state targeted is irrelevant$)$, fix $\phi(\textit{t}) = 0$, and linearly ramp the angle $\theta(\textit{t}) = \pi\textit{t}/\textit{t}_{\textrm{ramp}}$ in time, stopping the ramp at various times $\textit{t}_{\textrm{meas}}\leq\textit{t}_{\textrm{ramp}}$ to execute qubit tomography. From Eq.~$($\ref{large Berry curvature}$)$, the Berry curvature reads
\begin{eqnarray}\label{Measure_Berry_curvature}
\textit{F}_{\theta\phi} = \frac{\langle\partial_\phi\hat{H}\rangle}{\upsilon_\theta} = \frac{\Omega_n\sin\theta}{2\upsilon_\theta}\langle\hat{\sigma}_y\rangle,
\end{eqnarray}
where $\upsilon_\theta = \dot{\theta}(\textit{t}) = \pi/\textit{t}_{\textrm{ramp}}$. Fig.~\ref{fig:figure4a_4b}$($b$)$ shows the results of different Berry curvatures with $\Omega_1$ and $\Omega_2$, respectively, for a protocol with $\textit{t}_{\textrm{ramp}} = 1 \mu$s and $\Delta_2 = 0$. We extract the Berry curvatures $\textit{F}_{\theta\phi}$ from the measured values of $\langle\textit{g}|\hat{\sigma}_y|\textit{g}\rangle$ and $\langle\textit{e}|\hat{\sigma}_y|\textit{e}\rangle$. The Berry curvature is positive when the curving of the surface is elliptic. The sharper the elliptic curving, the greater the Berry curvature. If the surface starts to show hyperbolic curving, such as a saddle, then the Berry curvature becomes negative, and the sharper the hyperbolic curving of the surface, the smaller the Berry curvature, just the same as the Gauss curvature in Fig.~\ref{fig:figure1}.
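A complementary dynamical sketch $($again ours; the ramp time and parameters follow the protocol above, while the time grid and integrator tolerances are arbitrary choices$)$ integrates the Schr\"odinger equation along the ramp $\theta(t)=\pi t/t_{\textrm{ramp}}$ at $\phi = 0$, starting from the bare ground state, extracts $\textit{F}_{\theta\phi}$ from $\langle\hat{\sigma}_y\rangle$ via Eq.~$($\ref{Measure_Berry_curvature}$)$, and integrates over $\theta$; for $\Delta_2 = 0$ this should again give $\textit{C}_1$ close to $1$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

D1, D2 = 2 * np.pi * 30.0, 0.0   # rad/us (frequencies in MHz)
Om1 = 2 * np.pi * 10.0
t_ramp = 1.0                     # us, as in the protocol above
v_theta = np.pi / t_ramp

def H(t):
    th = np.pi * t / t_ramp      # linear ramp, phi fixed at 0
    return 0.5 * ((D1 * np.cos(th) + D2) * sz + Om1 * np.sin(th) * sx)

def rhs(t, psi):
    return -1j * (H(t) @ psi)    # Schroedinger equation, hbar = 1

psi0 = np.array([0, 1], dtype=complex)   # bare ground state |g> of H(0)
ts = np.linspace(0.0, t_ramp, 801)
sol = solve_ivp(rhs, (0.0, t_ramp), psi0, t_eval=ts, rtol=1e-9, atol=1e-11)

thetas = np.pi * ts / t_ramp
sy_exp = np.real(np.einsum('it,ij,jt->t', sol.y.conj(), sy, sol.y))
F = Om1 * np.sin(thetas) * sy_exp / (2.0 * v_theta)

dth = thetas[1] - thetas[0]
C1 = float(np.sum(0.5 * (F[1:] + F[:-1])) * dth)
print(round(C1, 3))   # close to 1 for Delta_2 = 0; deviations are O(v_theta)
\end{verbatim}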
\begin{figure}\label{fig:figure6a_6c}
\end{figure}
To induce a topological transition in the qubit, the detuning offset $\Delta_2$ is first changed. At the same time, the ground and excited states evolution are quantitatively modified. But for $|\Delta_2| < |\Delta_1|$, the corresponding Berry curvature as we see in Fig.~\ref{fig:figure5a_5b}$($a$)$ shows the Berry curvature acts like the magnetic field produced by a north magnetic charge $\textit{g}_{\textit{N}}$ $($sources$)$, while Fig.~\ref{fig:figure5a_5b}$($b$)$ shows that it acts like the magnetic field produced by a south magnetic charge $\textit{g}_{\textit{S}}$ $($sinks$)$. The scale of the Berry curvature corresponds to the strength of the magnetic field and it falls with the square of distance between the manifold and the magnetic poles. However, for $|\Delta_2| > |\Delta_1|$, it gives the zero Berry curvature, meaning that the system undergoes a topological transition at $|\Delta_2| = |\Delta_1|$. Such a transition only occurs when the Berry curvature becomes ill defined at the point $\Delta = \Omega = 0$ in Eq.~$($\ref{large Berry curvature}$)$.
By integrating Eq.~$($\ref{Measure_Berry_curvature}$)$, we obtain the first Chern number
\begin{eqnarray}\label{Measure_Chern_number}
\textit{C}_1 = \frac{1}{2\pi}\int_0^{\pi}\textit{d}\theta\int_0^{2\pi}\textit{d}\phi\textit{F}_{\theta\phi} = \int_0^{\pi}\textit{F}_{\theta\phi}\textit{d}\theta.
\end{eqnarray}
The measured Chern number $\textit{C}_1$ is plotted in Fig.~\ref{fig:figure6a_6c}$($b$)$, showing a relatively sharp transition at the expected value $|\Delta_2/\Delta_1| = 1$. We find that the topological transition in the elliptical manifold $($the green line$)$ is sharper $($faster$)$ than that in the sphere manifold $($the red line$)$ shown in Fig.~\ref{fig:figure6a_6c}$($b$)$, and it shows that the topological invariant $\textit{C}_1$ is strongly robust against variations in Hamiltonian parameters, such as in Rabi frequency $\Omega_n$ and in detuning $\Delta_1$. The topological transition corresponds to degeneracies moving from outside to inside and again outside the elliptical manifold. In other words, the Chern number is nonzero as long as there exists Berry curvature. From this point of view, we can set up the corresponding relation between topological invariants and magnetic monopoles~\cite{PM-2009,SKKSGVPPL-2014,PR-2014}. Then we can draw such a conclusion, as shown in Fig.~\ref{fig:figure6a_6c}, with a formula~\cite{DCAJ-2004}
\begin{eqnarray}\label{Final_Chern_number1}
\textit{C}_1 = \textmd{magnetic number}= \pm1,
\end{eqnarray}
where ``1'' is the number of the degeneracy points in parameter space of the Hamiltonian, and the sign ``$\pm$'' corresponds to the polarity of the magnetic charge in parameter space $($$C_1$ = $+1$ $\leftrightarrow$ $\textit{g}_{\textit{N}}$, $C_1$ = $-1$ $\leftrightarrow$ $\textit{g}_{\textit{S}}$$)$.
\section{Results and Discussion} \label{sec: Results}
In Eq.~$($\ref{parameters}$)$, $\theta = 0$ and $\pi$ corresponds to $\Delta = \Delta_1+\Delta_2$ and $\Delta = -\Delta_1+\Delta_2$, respectively. For the case with $\Delta = 0$ and $\Omega\neq0$, i.e., the microwave drive induces the resonant transition between the two states $|0\rangle$ and $|1\rangle$ of the qubit, the two eigenstates in Eq.~$($\ref{eigen_states1}$)$ and Eq.~$($\ref{eigen_states2}$)$ become a degenerate state $|\psi_s\rangle = \frac{1}{\sqrt{2}}(|\textit{e}\rangle + |\textit{g}\rangle)$.
Based on this point, we track and investigate the change of the quantum states accompanying the change of the Berry curvatures. In Fig.~\ref{fig:figure7a_7d}, the fidelity of the target states $|\textit{g}\rangle$ and $|\textit{e}\rangle$ is plotted versus $\theta/\pi$ and $\Delta_2/\Delta_1$, where the fidelity is defined as $\textit{f} = \langle\psi_j|\hat{\rho}(t_f)|\psi_j\rangle$ ($j=0,1$). We note that the quantum state flips at $\Delta_2/\Delta_1 = -1$, when the monopole in parameter space passes from outside to inside the spherical manifold, except in the area where the Berry curvatures $($the magnetic fields$)$ exist. However, the quantum state does not flip at $\Delta_2/\Delta_1 \geq 1$, because the Berry curvatures no longer exist in the manifold and the Gauss theorem for the magnetic field turns into the Stokes theorem; see $($a$)$ and $($c$)$ of Fig.~\ref{fig:figure6a_6c}.
\begin{figure}
\caption{$($Color online$)$ The fidelity of the target states versus $\theta/\pi$ and $\Delta_2/\Delta_1$. The initial state $|\textit{e}\rangle$ $($will evolve within $|\psi_0\rangle$$)$ is set in (a) and (c). The initial state $|\textit{g}\rangle$ $($will evolve within $|\psi_1\rangle$$)$ is set in (b) and (d). In (a) and (b), the density matrix of the target state is $\hat{\rho}(t_f) = |\textit{g}\rangle\langle\textit{g}|$, while in (c) and (d) the density matrix is $\hat{\rho}(t_f) = |\textit{e}\rangle\langle\textit{e}|$. The parameter chosen here are $\Delta_1 = 2\pi\times30$ MHz, $\Omega_1 = 2\pi\times10$ MHz, and $\Delta_2$ ramps from $-2\Delta_1$ to $2\Delta_1$. The Berry curvature only has relatively strong influence around $|\Delta_2/\Delta_1| = 1$ which is shown circled in (a) and (c).}
\label{fig:figure7a_7d}
\end{figure}
We note that, the strength of the magnetic field $($Berry curvature$)$ has an apparent impact on the quantum state $|\psi_0\rangle$ in $($b$)$ and $($d$)$ of Fig.~\ref{fig:figure7a_7d}, while it only has relatively strong influence on the state $|\psi_1\rangle$ around $|\Delta_2/\Delta_1| = 1$ in $($a$)$ and $($c$)$ of Fig.~\ref{fig:figure7a_7d} $($the dashed circle$)$.
In order to illustrate the change of the quantum states in the process of topological transition in more detail, we choose the special position $\theta = \pi$, for which the degeneracy condition becomes $\Delta_1 = \Delta_2$. For such a case, the initial state evolves to the degenerate state $|\psi_s\rangle$. Fig.~\ref{fig:figure8a_8d}$($a$)$ depicts the status of quantum states in $($a$)$ and $($d$)$ of Fig.~\ref{fig:figure7a_7d}, and Fig.~\ref{fig:figure8a_8d}$($b$)$ depicts the status of quantum states in $($b$)$ and $($c$)$ of Fig.~\ref{fig:figure7a_7d}. From Fig.~\ref{fig:figure8a_8d}$($c$)$ and Fig.~\ref{fig:figure8a_8d}$($d$)$, we note that the fidelity fluctuates around $\Delta_2/\Delta_1 = 1$. We attribute this interesting phenomenon to the influence of the magnetic fields resulting from the magnetic charges. When the charges pass from inside to outside the Hamiltonian manifold, the quantum states influenced by the Berry curvatures cause ripples in the Hilbert space; a detailed discussion will be presented in future work. In contrast, at the position $\Delta_2/\Delta_1 = -1$, there is no such apparent fluctuation, because the quantum states have not yet been affected by the magnetic field. More vividly speaking, the quantum states have not yet been ``magnetized'' by the magnetic monopoles. According to these phenomena, we thus find a new way to control the evolution of the system quantum states by manipulating $($moving$)$ the monopoles $($degenerate points$)$ in the manifolds.
\begin{figure}
\caption{$($Color online$)$ The fidelity of the target states versus $\Delta_2/\Delta_1$ at $\theta = \pi$. (a) The fidelity: $\big|\langle\psi_1|\textit{g}\rangle|^2$ or $\big|\langle\psi_0|\textit{e}\rangle|^2$. (b) The fidelity: $\big|\langle\psi_0|\textit{g}\rangle|^2$ or $\big|\langle\psi_1|\textit{e}\rangle|^2$. (c) The fidelity of the degenerate state: $\big|\langle\psi_0|\psi_s\rangle|^2$. (d) The fidelity of the degenerate state: $\big|\langle\psi_1|\psi_s\rangle|^2$. }
\label{fig:figure8a_8d}
\end{figure}
So far, our scheme for simulating Abelian Wu-Yang monopoles in parameter space has relied on a driven superconducting qubit. However, this general method could also be implemented in other spin-1/2 systems, such as an NMR system in a synthetic magnetic field. A simple experimental scheme is shown in Appendix~{III} of the Supplementary data.
\section{Conclusion}
We have simulated the Abelian Wu-Yang monopoles in the parameter space of the Hamiltonian of a superconducting qubit controlled by a microwave drive, for both geometry $($Berry curvature$)$ and topology $($Chern number$)$. The topological structure of the qubit can be captured by the distribution of the Berry curvature, which describes the geometry of the eigenstates of the Hamiltonian. We note that during the process of the topological transition, the Berry curvature and the fidelity of the quantum states have some interesting correlations due to the influence of the magnetic fields resulting from the magnetic charges. We also note that the quantum state flips at the position where the topological transition occurs, when the monopole in parameter space passes from outside to inside and again outside the spherical manifold, except in the area where the Berry curvatures $($the magnetic fields$)$ exist. This phenomenon might provide a promising perspective to flexibly manipulate the qubit states by designing specific synthetic magnetic fields.
Degenerate points in parameter space of the Hamiltonian act as the sources $($sinks$)$ of $\textit{C}_1$ and are analogous to magnetic monopoles. We also note that the transition of the quantum states is asymmetric during the process in which the monopole passes from outside to inside and again outside the Hamiltonian manifold. For example, when the monopole passes from inside to outside the Hamiltonian manifold, the quantum states influenced by the Berry curvatures cause ripples in the Hilbert space. However, when the monopole passes from outside to inside the Hamiltonian manifold, there is no such apparent fluctuation. We give a preliminary explanation of this interesting phenomenon by introducing the notion of magnetization by the magnetic charges. This scheme can also be implemented in other spin-1/2 systems. For example, it can be extended to NMR systems, where it is possible to experimentally investigate more intriguing properties of multi-monopoles. This could thus be used to construct new kinds of devices based on synthetic magnetic fields.
\begin{acknowledgments}
The authors would like to acknowledge insightful discussions with S. B. Zheng. This work was supported by the National Natural Science Foundation of China under Grants No.11405031, No.11305037, No.11374054, and No.11347114, the Natural Science Foundation of Fujian Province under Grants No.2014J05005, and the fund from Fuzhou University. \end{acknowledgments}
\end{document}
|
arXiv
|
A circle has an area of $\pi$ square units. What is the length of the circle's diameter, in units?
Call the length of the radius $r$ units. $r^2\pi=\pi$, so $r=1$. The diameter is twice the radius, or $\boxed{2}$ units.
|
Math Dataset
|
Canon arithmeticus
The Canon arithmeticus is a set of mathematical tables of indices and powers with respect to primitive roots for prime powers less than 1000, originally published by Carl Gustav Jacob Jacobi (1839). The tables were at one time used for arithmetical calculations modulo prime powers, though like many mathematical tables they have now been replaced by digital computers. Jacobi also reproduced Burkhardt's table of the periods of decimal fractions of 1/p, and Ostrogradsky's tables of primitive roots of primes less than 200, and gave tables of indices of some odd numbers modulo powers of 2 with respect to the base 3 (Dickson 2005, p.185–186).
Although the second edition of 1956 has Jacobi's name on the title, it has little in common with the first edition apart from the topic: the tables were completely recalculated, usually with a different choice of primitive root, by Wilhelm Patz. Jacobi's original tables use 10 or –10 or a number with a small power of this form as the primitive root whenever possible, while the second edition uses the smallest possible positive primitive root (Fletcher 1958).
The term "canon arithmeticus" is occasionally used to mean any table of indices and powers of primitive roots.
References
• Dickson, Leonard Eugene (2005) [1919], History of the theory of numbers. Vol. I: Divisibility and primality., New York: Dover Publications, ISBN 978-0-486-44232-7, MR 0245499
• Fletcher, A. (1958), "Canon Arithmeticus by C. G. J. Jacobi; H. Brandt", The Mathematical Gazette, Review, The Mathematical Association, 42 (339): 76–77, doi:10.2307/3608400, ISSN 0025-5572, JSTOR 3608400, S2CID 246262528
• Jacobi, Carl Gustav Jacob (1839), Canon arithmeticus, sive tabulae quibus exhibentur pro singulis numeris primis vel primorum potestatibus infra 1000 numeri ad datos indices et indices ad datos numeros pertinentes, Berlin: Typis Academicis, Berolini, MR 0081559
• Jacobi, Carl Gustav Jacob (1956) [1839], Brandt, Heinrich; Patz, Wilhelm (eds.), Canon arithmeticus, Mathematische Lehrbücher und Monographien: Mathematische Monographien, vol. 2, Berlin: Akademie-Verlag, MR 0081559
See also
• A. W. Faber Model 366, a discrete slide rule incorporating similar concepts to the Canon arithmeticus
|
Wikipedia
|
DOURNAC.ORG
Sciences > Correction of Probability Examination - Subatomic Physics Master's degree
1.Gauss's law and Characteristic function
2.Transfer Theorem
3.Expectation and Statistical Variance in general case
4.Making two uncorrelated random variables
5.Independence and non-correlation of two random variables
6.Maximum likelihood - 2 measure samples
7.Maximum likelihood - 2 estimations with error
8.Monte-Carlo method
9.Generator of uniform law
10.Chi2 method
11.Chi2 method with measure errors
12.Confidence level
Exercise 1.1 :
We consider a random variable $x$ on $]-\infty ,+\infty [$ following a Gaussian law with parameters $\mu $ and $\sigma $.
- Give the expression of the Gaussian law.
- Give the conditions on the parameters for this law to be a probability distribution function (pdf).
- What is the characteristic function (definition and expression) ?
- Do the same for the cumulative function.
Reminder :
\begin{equation} \text{erf}(z)=\dfrac{2}{\sqrt{\pi}}\,{\large\int}_{0}^{z}\,exp(-t^{2})dt \end{equation}
Let $x$ be a random variable defined on $]-\infty ,+\infty [$ and following a Normal distribution (Gaussian).
Distribution function is :
\begin{equation} f(x)=\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\bigg(-\dfrac{1}{2}\big(\dfrac{x-\mu}{\sigma}\big)^{2}\bigg) \end{equation}
Conditions on the parameters for this formula to be a pdf: $\mu\in\,]-\infty,+\infty[$ and $\sigma>0$, which ensure $f(x)\geqslant 0$ for all $x$ and ${\large\int}_{-\infty}^{+\infty}f(x)dx=1$.
Characteristic function :
\begin{equation} \Phi(t)={\large\int}_{-\infty}^{+\infty}\,f(x)\,e^{itx}dx \end{equation}
\begin{equation} \Longrightarrow
\Phi(t)=\dfrac{1}{\sqrt{2\pi}\sigma}\,{\large\int}_{-\infty}^{+\infty}\,exp\bigg(-\dfrac{1}{2}\big(\dfrac{x-\mu}{\sigma}\big)^{2}+itx\bigg)dx \end{equation}
We make the following change of variable :
\begin{equation} x=\sqrt{2}\sigma\,X+\mu \Rightarrow dx=\sqrt{2}\sigma\,dX \end{equation}
\begin{equation} \Phi(t)=\dfrac{1}{\sqrt{\pi}}\,e^{it\mu}\,{\large\int}_{-\infty}^{+\infty}\,e^{-X^{2}}\,e^{it\sqrt{2}\sigma\,X}dX
\label{eq6} \end{equation}
We use this equality (demonstration can be found on Web) :
\begin{equation} {\large\int}_{-\infty}^{+\infty}\,e^{-t^{2}}\,e^{zt}dt=\sqrt{\pi}e^{z^{2}/4} \end{equation}
Taking $z=it\sqrt {2}\sigma $, one gets :
\begin{equation} \Phi(t)=exp(it\mu)\,exp(-\dfrac{t^{2}\sigma^{2}}{2}) \end{equation}
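As a quick numerical illustration, the closed-form characteristic function can be checked against the empirical average of $e^{itx}$ over Gaussian samples; the sketch below is a minimal example, with arbitrary values of $\mu$, $\sigma$ and $t$ chosen only for the check.

```python
import numpy as np

# Minimal sketch: compare the empirical characteristic function of Gaussian
# samples with exp(i t mu - t^2 sigma^2 / 2). Parameter values are arbitrary.
rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.8
x = rng.normal(mu, sigma, 200_000)

for t in (0.5, 1.0, 2.0):
    empirical = np.mean(np.exp(1j * t * x))                    # Monte-Carlo estimate of E[e^{itX}]
    closed_form = np.exp(1j * t * mu - 0.5 * t**2 * sigma**2)  # formula derived above
    print(t, empirical, closed_form)
```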
Cumulative function :
\begin{equation} F(x)=P(X\,\leq\,x)={\large\int}_{-\infty}^{x}\,f(x')dx'={\large\int}_{-\infty}^{\mu}\,f(x')dx'
+{\large\int}_{\mu}^{x}\,f(x')dx' \end{equation}
\begin{equation} F(x)={\large\int}_{-\infty}^{\mu}\,\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\bigg(-\dfrac{1}{2}\big(\dfrac{x'-\mu}{\sigma}\big)^{2}\bigg)dx'+
{\large\int}_{\mu}^{x}\,\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\bigg(-\dfrac{1}{2}\big(\dfrac{x'-\mu}{\sigma}\big)^{2}\bigg)dx' \end{equation}
Another change of variable, $x''=x'-\mu $, gives :
\begin{equation} {\large\int}_{-\infty}^{0}\,\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\bigg(-\dfrac{1}{2}\big(\dfrac{x''}{\sigma}\big)^{2}\bigg)dx''=\dfrac{1}{2} \end{equation}
For the second integral, doing the substitution $v=\dfrac {x''}{\sqrt {2}\sigma }$, we have :
\begin{equation} \dfrac{1}{\sqrt{\pi}}\,{\large\int}_{0}^{\frac{x-\mu}{\sqrt{2}\sigma}}\,e^{-v^{2}}dv=\dfrac{1}{2}\,\text{erf}\bigg(\dfrac{x-\mu}{\sqrt{2}\sigma}\bigg) \end{equation}
So the final result :
\begin{equation} F(x)=\dfrac{1}{2}+\dfrac{1}{2}\,\text{erf}\bigg(\dfrac{x-\mu}{\sqrt{2}\sigma}\bigg) \end{equation}
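This expression can be cross-checked numerically against a reference implementation of the Gaussian cumulative function; the short sketch below assumes arbitrary values of $\mu$ and $\sigma$.

```python
import numpy as np
from scipy.special import erf
from scipy.stats import norm

# Minimal sketch: F(x) = 1/2 + 1/2 erf((x - mu)/(sqrt(2) sigma)) vs. a reference CDF.
mu, sigma = 1.5, 0.8                      # arbitrary parameters (assumption)
x = np.linspace(-2.0, 5.0, 9)
F_erf = 0.5 + 0.5 * erf((x - mu) / (np.sqrt(2) * sigma))  # formula derived above
F_ref = norm.cdf(x, loc=mu, scale=sigma)                  # reference implementation
print(np.max(np.abs(F_erf - F_ref)))                      # should be at machine precision
```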
We now make another change of variable, $y=exp(x)$, where $x$ is defined above and follows a Gaussian law.
- How will you calculate the pdf of $y$ ?
- Give the expression of pdf($y$).
- Calculate the expectation $E(y)$ ; first write its definition.
- Calculate the expression of $V(y)$ (variance of $y$).
We have to find the distribution function of $y$, denoted $g(y)$. Then,
\begin{equation} (\,\,x\in \,\,]-\infty,+\infty[\,\,) \Rightarrow (\,\,y=e^{x} \in \,\,]0,+\infty[\,\,) \end{equation}
Transfer theorem : taking into account that $x=\ln y$ and that probability is conserved under the change of variable, we have :
\begin{equation} f(x)dx=g(y)dy \end{equation}
that implies :
\begin{equation} g(y)=f(x(y))\left|\dfrac{dx}{dy}\right| \end{equation}
So we deduce for $g(y)$ :
\begin{equation} g(y)=\left\{
\begin{array}{ccc}
\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\left(-\dfrac{1}{2}\left(\dfrac{\ln y-\mu}{\sigma}\right)^2\right)\,\dfrac{1}{y}&\text{if}&
y\,>\,0\\
0&\text{if}&y\,\leq\,0
\end{array}
\right. \end{equation}
Expectation $E(y)$ :
\begin{equation} E(y)={\large\int}_{-\infty}^{+\infty}y\,g(y)\,dy={\large\int}_{0}^{+\infty}y\,g(y)\,dy \end{equation}
with $g(y)=0$ if $y\leq 0$.
Given $x=\ln y$ and $dy=e^{x}\,dx$, we get :
\begin{equation} E(y)={\large\int}_{-\infty}^{+\infty}\,e^{x}\,\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\left(-\dfrac{1}{2}\left(\dfrac{x-\mu}{\sigma}\right)^2\right)\,dx \end{equation}
One recognizes equation (6) for characteristic function :
\begin{equation} E(y)=\Phi(it=1)=e^{\mu+\frac{\sigma^{2}}{2}} \end{equation}
Variance $V(y)$ :
\begin{equation} V(y)=E(y^{2})-E(y)^{2} \end{equation}
\begin{equation} E(y^{2})={\large\int}_{-\infty}^{+\infty}\,y^{2}g(y)\,dy \end{equation}
Following the same previous reasoning :
\begin{equation} E(y^{2})={\large\int}_{-\infty}^{+\infty}\,e^{2x}\,\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\left(-\dfrac{1}{2}\left(\dfrac{x-\mu}{\sigma}\right)^2\right)\,dx \end{equation}
\begin{equation} E(y^{2})=\Phi(it=2) \end{equation}
Finally :
\begin{equation} V(y)=exp(2\mu+2\sigma^{2})-exp(2\mu+\sigma^{2}) \end{equation}
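A minimal simulation sketch (with arbitrary parameter values) verifying the expressions of $E(y)$ and $V(y)$ obtained above:

```python
import numpy as np

# Minimal sketch: sample y = exp(x) with x Gaussian and compare empirical moments
# with the closed-form expressions. mu and sigma are arbitrary (assumption).
rng = np.random.default_rng(1)
mu, sigma = 0.3, 0.5
y = np.exp(rng.normal(mu, sigma, 1_000_000))

E_th = np.exp(mu + sigma**2 / 2)                            # E(y) derived above
V_th = np.exp(2*mu + 2*sigma**2) - np.exp(2*mu + sigma**2)  # V(y) derived above
print(y.mean(), E_th)
print(y.var(), V_th)
```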
We have a set of random variables $x_{i},i=1,...,n$ and we consider the random variable $z=\sum _{i=1}^{n}a_{i}x_{i}$ where $a_{i}$ are constants.
- Write the expectation $E(z)$.
- Write the expression of $V(z)$ (in general case).
Expectation $E(z)$ :
\begin{equation} E(z)=E\big(\sum_{i=1}^{n}a_{i}x_{i}\big)=\sum_{i=1}^{n}a_{i}E(x_{i}) \end{equation}
Variance $V(z)$ :
\begin{equation} V(z)=\sum_{i=1}^{n}a_{i}^{2}V(x_{i}) + \sum_{i \neq j}a_{i}a_{j}\,\text{cov}(x_{i},x_{j})\\
\label{eq-cov} \end{equation}
with $\,\,\text{cov}(X,Y)=E[(X-E(X))(Y-E(Y))]$
One starts from 2 random variables $x$ and $y$ with variances $\sigma _{x}^{2}$ and $\sigma _{y}^{2}$ respectively and correlation factor $\rho _{xy}$.
We make the following change of variables :
$u=x+y\sigma _{x}/\sigma _{y}$
$v=x-y\sigma _{x}/\sigma _{y}$
Calculate $V(u)$ and $V(v)$ variances and covariance $\text{cov}(u,v)$. What can you conclude ? What is the correlation factor $\rho _{uv}$ ?
We use the previous relation (27) so that :
\begin{equation} V(u)=V(x)+(\dfrac{\sigma_{x}}{\sigma_{y}})^{2}\,V(y)+2\dfrac{\sigma_{x}}{\sigma_{y}}\text{cov}(x,y) \end{equation}
\begin{equation} \Rightarrow
V(u)=2\sigma_{x}^{2}+2\sigma_{x}^{2}\rho_{xy}=2\sigma_{x}^{2}(1+\rho_{xy}) \end{equation}
In the same way for $V(v)$ :
\begin{equation} V(v)=2\sigma_{x}^{2}-2\sigma_{x}^{2}\rho_{xy}=2\sigma_{x}^{2}(1-\rho_{xy}) \end{equation}
Reminder : $\rho _{xy}=\dfrac {\text{cov}(x,y)}{\sigma _{x}\sigma _{y}}$
For the covariance of $u$ and $v$, we compute :
\begin{equation} \text{cov}(u,v)=E((u-\bar{u})(v-\bar{v}))=E((x+y\dfrac{\sigma_{x}}{\sigma_{y}}-\bar{x}-
\bar{y}\dfrac{\sigma_{x}}{\sigma_{y}})(x-y\dfrac{\sigma_{x}}{\sigma_{y}}-\bar{x}+
\bar{y}\dfrac{\sigma_{x}}{\sigma_{y}})) \end{equation}
\begin{equation} =E((x-\bar{x})^{2}-\dfrac{\sigma_{x}}{\sigma_{y}}(y-\bar{y})(x-\bar{x})+(x-\bar{x})(y-\bar{y})
\dfrac{\sigma_{x}}{\sigma_{y}}-\big(\dfrac{\sigma_{x}}{\sigma_{y}}\big)^{2}(y-\bar{y})^{2}) \end{equation}
\begin{equation} =V(x)-\big(\dfrac{\sigma_{x}}{\sigma_{y}}\big)^{2}V(y)=0 \end{equation}
\begin{equation} \Rightarrow \rho_{uv}=0 \end{equation}
We see that $u$ and $v$ are not correlated : the conclusion is that we can build two uncorrelated variables from two correlated ones.
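The construction is easy to verify numerically; the sketch below draws correlated Gaussian pairs with arbitrary $\sigma_x$, $\sigma_y$ and $\rho_{xy}$ (illustrative values only) and checks that $u$ and $v$ are uncorrelated with the variances computed above.

```python
import numpy as np

# Minimal sketch: build u = x + y*sx/sy and v = x - y*sx/sy from correlated (x, y).
rng = np.random.default_rng(2)
sx, sy, rho = 2.0, 0.5, 0.7                      # arbitrary sigmas and correlation (assumption)
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, 500_000).T

u = x + y * sx / sy
v = x - y * sx / sy
print(np.corrcoef(u, v)[0, 1])              # ~0: u and v are uncorrelated
print(u.var(), 2 * sx**2 * (1 + rho))       # matches V(u) = 2*sigma_x^2*(1+rho)
print(v.var(), 2 * sx**2 * (1 - rho))       # matches V(v) = 2*sigma_x^2*(1-rho)
```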
The random variables $x$ and $y$ jointly follow a Gaussian law (pdf$= f(x,y)$) with $\mu _{x}=0$ and $\mu _{y}=0$.
- Write its expression.
- Calculate Expectations $E(x)$ and $E(y)$.
- Calculate the distribution function for $u$ and $v$ (pdf$=h(u,v)$).
- What do you conclude about these two random variables $u$ and $v$ ?
General formula of 2D Gaussian distribution function :
\begin{align} f(x,y)&=\dfrac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}} \notag \\
&exp\left(-\dfrac{1}{2}\dfrac{1}{(1-\rho^2)}\left(\left(\dfrac{x-\mu_x}{\sigma_x}\right)^2+\left(\dfrac{y-\mu_y}{\sigma_y}\right)^2-\dfrac{2\rho\,(x-\mu_x)(y-\mu_y)}{\sigma_x\,\sigma_y}\right)\right) \end{align}
In our case, function is written with $\mu _{x}=\mu _{y}=0$ :
\begin{align} f(x,y)&=\dfrac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\notag \\
&exp\left(-\dfrac{1}{2}\dfrac{1}{(1-\rho^2)}\left(\left(\dfrac{x}{\sigma_x}\right)^2+\left(\dfrac{y}{\sigma_y}\right)^2-\dfrac{2\rho\,xy}{\sigma_x\,\sigma_y}\right)\right) \end{align}
Expectations :
\begin{equation} E(x)={\large\int}_{-\infty}^{+\infty}\,{\large\int}_{-\infty}^{+\infty}\,x\,f(x,y)\,dxdy=\mu_{x}=0 \end{equation}
\begin{equation} E(y)={\large\int}_{-\infty}^{+\infty}\,{\large\int}_{-\infty}^{+\infty}\,y\,f(x,y)\,dxdy=\mu_{y}=0 \end{equation}
Let us now calculate the pdf $h(u,v)$ : from the transfer theorem, probability is conserved between the couples ($x,y$) and ($u,v$) :
\begin{equation} h(u,v)\,dudv=f(x,y)\,dxdy \end{equation}
then :
\begin{equation} h(u,v)=f(\Phi(u,v))\,\vert\text{det}(J_{\Phi}(u,v))\vert \end{equation}
where $\Phi :(u,v)\,\longrightarrow \,(x,y)$ and $J_{\Phi }(u,v)$ is the Jacobian of the $\Phi $ transformation.
\begin{equation} \text{det}(J_{\Phi}(u,v))=\left| \begin{array}{cc}
\dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\
\dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \\
\end{array} \right | \,\,\, \text{with} \,\,\,
x=\dfrac{u+v}{2}\,\,\text{and}\,\,y=\dfrac{u-v}{2}\,\dfrac{\sigma_y}{\sigma_x} \end{equation}
\begin{equation} \text{det}(J_{\Phi}(u,v))=\left| \begin{array}{cc}
\dfrac{1}{2} & \dfrac{1}{2} \\
\dfrac{1}{2}\,\dfrac{\sigma_y}{\sigma_x} &
-\dfrac{1}{2}\,\dfrac{\sigma_y}{\sigma_x} \\
\end{array} \right | \,=-\dfrac{1}{2}\dfrac{\sigma_y}{\sigma_x} \end{equation}
Finally, one deduces $h(u,v)$ pdf :
\begin{align} &h(u,v)=\dfrac{1}{2}\dfrac{\sigma_y}{\sigma_x}\dfrac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}} \notag \\
&exp\left(-\dfrac{1}{2(1-\rho^2)}\left(\left(\dfrac{u+v}{2}\right)^2\dfrac{1}{\sigma_x^2}-\dfrac{1}{2}\left(\dfrac{(u-v)}{2}\dfrac{\sigma_y}{\sigma_x}\right)^2\dfrac{1}{\sigma_y^2}
-2\rho\dfrac{(u+v)(u-v)}{4\sigma_x\sigma_y}\dfrac{\sigma_y}{\sigma_x}\right)\right) \end{align}
\begin{align} &h(u,v)=\dfrac{1}{4}\dfrac{1}{\pi\sigma_x^2\sqrt{1-\rho^2}} \notag \\
&exp\left(-\dfrac{1}{8(1-\rho^2)}\left(\dfrac{u^2+2uv+v^2}{\sigma_x^2}\right)-\dfrac{1}{8(1-\rho^2)}\left(\dfrac{u^2-2uv+v^2}{\sigma_x^2}\right)
+\dfrac{\rho}{4(1-\rho^2)}\left(\dfrac{u^2-v^2}{\sigma_x^2}\right)\right) \end{align}
\begin{align} &h(u,v)=\dfrac{1}{4}\dfrac{1}{\pi\sigma_x^2\sqrt{1-\rho^2}} \notag \\
&exp\left(-\dfrac{1}{4(1-\rho^2)}\dfrac{u^2(1-\rho)}{\sigma_x^2}-\dfrac{1}{4(1-\rho^2)}\dfrac{v^2(1+\rho)}{\sigma_x^2}\right) \end{align}
Since we get from Exercise 2.1 :
\begin{equation} \sigma_{x}^{2}(1+\rho_{xy})=\dfrac{\sigma_{u}^{2}}{2}\\
\sigma_{x}^{2}(1-\rho_{xy})=\dfrac{\sigma_{v}^{2}}{2} \end{equation}
\begin{equation} h(u,v)=\dfrac{1}{\sqrt{2\pi}\sigma_u}\,exp\left(-\dfrac{u^2}{2\sigma_u^2}\right)\dfrac{1}{\sqrt{2\pi}\sigma_v}\,exp\left(-\dfrac{v^2}{2\sigma_v^2}\right) \end{equation}
\begin{equation} h(u,v)=Gauss(u,0,\sigma_u)\,\,\,\cdot\,\,\,Gauss(v,0,\sigma_v) \end{equation}
We can see that $u$ and $v$ are independent because of the pdf factorization $h(u,v)=\text {pdf}(u) \cdot \text {pdf}(v)$. If two variables $X$ and $Y$ are statistically independent, then they are not correlated. The reverse is not necessarily true. Indeed, non-correlation implies independence only in particular cases, and the Gaussian random variable is one of them.
One considers a continuous and positive random variable $t$, following an exponential law $\propto exp(-t/\tau )$. Give the expression of its pdf $f(t)$ and calculate its expectation $E(t)$ and variance $V(t)$. We have $N$ measures $t_{i}$ of this variable and choose the maximum likelihood method to determine the $\tau $ parameter. Describe the principle of the maximum likelihood method and apply it to calculate this parameter $\tau$ and its variance. Now, we have 2 independent samples of $N_{1}$ and $N_{2}$ measures, so that the total sample consists of $N=N_{1}+N_{2}$ measures. Calculate the estimation of $\tau $ from these $N$ measures, expressing it in terms of the contributions of $\tau _{1}$ and $\tau _{2}$.
Let $t$ be a random variable $>0$ following an exponential law ; pdf is given by :
\begin{equation} f(t)=\left\{
\begin{array}{lcc}
\dfrac{1}{\tau}\,exp\left(-\dfrac{t}{\tau}\right)&\text{if}&
t\,\geqslant\,0\\
0&\text{if}&t\,<\,0
\end{array}
\right. \end{equation}
Moreover :
\begin{equation} \int_{0}^{+\infty}f(t)dt=1 \,\,\,\text{and}\,\,\, f(t)\geqslant 0\,\, \forall t \end{equation}
To get expectation, we use integration rules :
\begin{equation} E(t)=\int_{0}^{+\infty}\dfrac{t}{\tau}\,exp(\dfrac{-t}{\tau})\,dt=
\big[\dfrac{t}{\tau}\,(-\tau)exp(-\dfrac{t}{\tau})\big]_{0}^{+\infty}
-\int_{0}^{+\infty}\dfrac{1}{\tau}\,(-\tau)\,exp(-\dfrac{t}{\tau})\,dt \end{equation}
\begin{equation} =\,0+\int_{0}^{+\infty}\,exp(-\dfrac{t}{\tau})\,dt=\tau \end{equation}
For variance :
\begin{equation} E(t^{2})=\int_{0}^{+\infty}\dfrac{t^{2}}{\tau}\,exp(\dfrac{-t}{\tau})\,dt=
\big[-t^{2}\,exp(-\dfrac{t}{\tau})\big]_{0}^{+\infty}
-\int_{0}^{+\infty}\dfrac{1}{\tau}\,\dfrac{2t(-\tau)}{\tau}\,exp(-\dfrac{t}{\tau})\,dt \end{equation}
\begin{equation} =\,0+2\,\int_{0}^{+\infty}\,t\,exp(-\dfrac{t}{\tau})\,dt=2\tau^{2} \end{equation}
$\Rightarrow \,\, V(t)=\tau ^{2}\,\,\Rightarrow \,\,\sigma =\tau $
We have $N$ measures $t_{i}, i=1,...,N$. Maximum likelihood allows us to compute an estimation of the $\tau $ parameter, obtained by maximizing the likelihood function $\mathcal{L}$ defined as :
\begin{equation} \mathcal{L}=\prod_{i=1}^{N}\,f(t_{i}) \end{equation}
where $f(t)$ is the pdf of the variable $t$. The $\tau $ parameter is found by minimizing $-\ln \mathcal{L}$ :
\begin{equation} \dfrac{\partial\,(-\ln \mathcal{L})}{\partial\tau}=0 \end{equation}
Calculate in our case the likelihood function $\mathcal{L}$ :
\begin{equation} \mathcal{L}=\prod_{i=1}^{N}\big(\dfrac{1}{\tau}\,e^{-\dfrac{t_{i}}{\tau}}\big) \end{equation}
We get :
\begin{equation} -\ln \mathcal{L}=-\sum_{i=1}^{N}\ln\big(\dfrac{1}{\tau}\,e^{-\dfrac{t_i}{\tau}}\big)=N\,\ln \tau +\sum_{i=1}^{N}\dfrac{t_i}{\tau} \end{equation}
So one has to solve :
\begin{equation} \dfrac{\partial(-\ln \mathcal{L})}{\partial \tau}=0=\dfrac{N}{\tau}-\dfrac{1}{\tau^2}\,\sum_{i=1}^{N}\,t_{i} \end{equation} \begin{equation} \Rightarrow\,\,\tau=\dfrac{1}{N}\sum_{i=1}^{N}t_{i} \end{equation}
The variance of this estimation is given by the second derivative :
\begin{equation} V_\tau=-E\bigg(\dfrac{\partial^2\ln \mathcal{L}}{\partial\tau^2}\bigg)^{-1} \end{equation}
\begin{align} \Leftrightarrow\,\,V_\tau&=-E\bigg(\dfrac{\partial^2}{\partial
\tau^2}\big(\sum_{i=1}^N-\ln \tau-\sum_{i=1}^{N}\dfrac{t_{i}}{\tau}\big)\bigg)^{-1}
\notag \\
&\Leftrightarrow\,\,V_\tau=-E\bigg(\dfrac{\partial}{\partial
\tau}\big(-\dfrac{N}{\tau}+\dfrac{1}{\tau^2}\sum_{i=1}^{N}t_{i}\big)\bigg)^{-1}\notag
&=-E\big(\dfrac{N}{\tau^2}-2\dfrac{\sum_{i=1}^{N}\,t_{i}}{\tau^3}\big)^{-1}\notag \\
&=E\big(\dfrac{N}{\tau^2}\big)^{-1}\notag \\
&\Rightarrow\,\,\sigma_{\tau}=\dfrac{\tau}{\sqrt{N}} \end{align}
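A minimal simulation sketch (arbitrary values of $\tau$ and $N$) illustrating that the maximum likelihood estimate is the sample mean and that its dispersion is close to $\tau/\sqrt{N}$:

```python
import numpy as np

# Minimal sketch: repeat the experiment n_rep times and look at the spread of the MLE.
# tau_true, N and n_rep are arbitrary illustrative choices (assumption).
rng = np.random.default_rng(3)
tau_true, N, n_rep = 2.0, 400, 2000
samples = rng.exponential(tau_true, size=(n_rep, N))

tau_hat = samples.mean(axis=1)                 # MLE: tau_hat = (1/N) * sum(t_i)
print(tau_hat.mean())                          # ~ tau_true (unbiased)
print(tau_hat.std(), tau_true / np.sqrt(N))    # empirical spread vs tau/sqrt(N)
```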
Now we have 2 independent samples $N_{1}$ and $N_{2}$. Each produces its own estimation :
\begin{equation} \tau_1=\dfrac{1}{N_1}\sum_{i=1}^{N_1}t_i \,\,\,\,
\sigma_{\tau_1}=\dfrac{\tau_1}{\sqrt{N_1}} \,\,\,\,
\tau_2=\dfrac{1}{N_2}\sum_{j=1}^{N_2}t_j \,\,\,\,
\sigma_{\tau_2}=\dfrac{\tau_2}{\sqrt{N_2}} \,\,\,\, \end{equation}
Taking into account the total sample $N_{1}$+$N_{2}$ :
\begin{equation} \tau=\dfrac{1}{N_1+N_2}\bigg(\sum_{i=1}^{N_1}t_i+\sum_{j=1}^{N_2}t_j\bigg) \end{equation}
\begin{equation} \Rightarrow\tau=\dfrac{N_1\tau_1+N_2\tau_2}{N_1+N_2} \end{equation}
This result is the weighted average from these 2 samples.
Two independent experiments have measured ($\tau _{1},\sigma _{1}$) and ($\tau _{2},\sigma _{2}$), with $\sigma _{i}$ representing the errors on the measures.
(1) From these two measures, assuming the errors are Gaussian, we want to get the estimation of $\tau $ and its error (i.e. with a combination of the two measures).
- Which method do you use ?
- Calculate the estimation of $\tau $ and its error.
(2) From these two measures ($\tau _{1},\sigma _{1}$) and ($\tau _{2},\sigma _{2}$) :
Define the equivalent numbers $\tilde {N}_{1}$ and $\tilde {N}_{2}$ of each measure; give the relations defining them. We use the maximum likelihood method to find the estimation of $\tau $ from the definition of these 2 equivalent numbers. Calculate this estimation of $\tau $ in this case (expressing it in terms of $\tau _{1},\sigma _{1}$ and $\tau _{2},\sigma _{2}$). Compare it to the previous expression in (1).
As in the previous exercise, we choose the maximum likelihood method with the pdf of the 2 measures :
\begin{equation} f(\tau,\sigma)=\dfrac{1}{\sqrt{2\pi}\sigma}\,exp\big(-\dfrac{1}{2}\,\dfrac{(\tau-\hat{\tau})^2}{\sigma^2}\big) \end{equation}
One has to maximize the likelihood function :
\begin{equation} \mathcal{L}=\prod_{i=1}^{2}\dfrac{1}{\sqrt{2\pi}\sigma_i}\,exp\big(-\dfrac{1}{2}\,\dfrac{(\tau_i-\hat{\tau})^2}{\sigma_i^2}\big) \end{equation}
taking the following condition :
\begin{equation} \dfrac{\partial\,(-\ln \mathcal{L})}{\partial\hat{\tau}}=0 \end{equation}
\begin{equation} \Rightarrow\hat{\tau}=\dfrac{\tau_1/\sigma_1^2+\tau_2/\sigma_2^2}{1/\sigma_1^2+1/\sigma_2^2} \end{equation}
$\sigma _{\hat {\tau }}$ is deduced from the second derivative :
\begin{equation} \dfrac{1}{\sigma_{\hat{\tau}}^{2}}=\dfrac{1}{\sigma_1^2}+\dfrac{1}{\sigma_2^2} \end{equation}
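A small helper function (hypothetical, for illustration only) implementing this inverse-variance weighted combination:

```python
def combine(tau1, sig1, tau2, sig2):
    """Combine two independent Gaussian measurements by inverse-variance weighting."""
    w1, w2 = 1.0 / sig1**2, 1.0 / sig2**2
    tau_hat = (w1 * tau1 + w2 * tau2) / (w1 + w2)   # weighted estimate derived above
    sig_hat = (w1 + w2) ** -0.5                     # 1/sigma^2 = 1/sigma1^2 + 1/sigma2^2
    return tau_hat, sig_hat

# Example with arbitrary numbers: the more precise measurement dominates.
print(combine(2.1, 0.2, 1.9, 0.4))
```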
For each of these two measures, the equivalent number $\tilde {N}$ is defined by :
\begin{equation} \dfrac{\sigma_1}{\tau_1}=\dfrac{1}{\sqrt{\tilde{N_1}}}
\,\,\,\,\,\,\,\dfrac{\sigma_2}{\tau_2}=\dfrac{1}{\sqrt{\tilde{N_2}}} \end{equation}
This expresses the relative error of each measure as the statistical error due to an equivalent number of events.
If we apply the calculation of exercise 3.1 with the equivalent number $\tilde {N}$, we have :
\begin{equation} \hat{\tau}=\dfrac{\tilde{N_1}\tau_1+\tilde{N_2}\tau_2}{\tilde{N_1}+\tilde{N_2}} \end{equation}
\begin{equation} \hat{\tau}=\dfrac{\tau_1/(\sigma_1/\tau_1)^2+\tau_2/(\sigma_2/\tau_2)^2}{1/(\sigma_1/\tau_1)^2+1/(\sigma_2/\tau_2)^2} \end{equation}
In conclusion :
case (1) : weighted by the inverse of the squared errors.
case (2) : weighted by the inverse of the squared relative errors.
We are looking to compute the integral of a $f(x,y)$ function by Monte-Carlo method :
\begin{equation} {\large\int}_{x=0}^{x=1}\,{\large\int}_{y=0}^{y=x}\,f(x,y)\,dx\,dy \end{equation}
For this, we have a generator of uniform random numbers on $[0,1]$.
How will you proceed (Make a schema of the integration area on $(x,y)$ plane) ?
Reminder - Monte-Carlo method :
From the transfer theorem, we get the expression of the expectation of a function $g$ of the random variable $X$ as :
\begin{equation} G = E(g(X))=\int_a^b g(x)\,f_X(x) \, \mbox{d}x \end{equation}
where $f_{X}$ is a pdf on $[a,b]$ interval. One usually takes a uniform distribution on $[a,b]$ :
\begin{equation} f_X(x) = \frac{1}{b-a} \end{equation}
The principle is to generate a sample $(x_{1},x_{2},...,x_{N})$ following the law of $X$ (so we calculate an estimator, called "the Monte-Carlo estimator", from this sample).
The law of large numbers allows us to build this estimator from the empirical average :
\begin{equation} \tilde{g_N} = \frac{1}{N} \sum_{i=1}^{N} g(x_i), \end{equation}
which is, by the way, an unbiased estimator of the expectation. This is the Monte-Carlo estimator. We can clearly see that, by sampling values over the integration support and evaluating the function to integrate at these values, we can build a statistical approximation of the integral.
Thanks to the "uniform generator", Monte-Carlo method gives a numerical value of the integral noticed $I$. Area of integration is represented on the figure below :
Representation of integration area
So we can distinguish two cases :
case (1) - We draw 2 random numbers, one for $x$ and the other for $y$ :
\begin{equation} \textrm{Random sampling}=\left\{
\begin{array}{ccc}
x_{i} & \in & [0,1] \\
y_{i} & \in & [0,1]
\end{array}
\right. \end{equation}
If $x_{i}>y_{i}$, then we increment $I$ in the following way : $I=I+f(x_{i},y_{i})$ ; otherwise we redo a random draw. We have drawn 2 random numbers and, on average, half of the draws are lost.
case (2) - We draw 2 random numbers as before :
\begin{equation} \textrm{Random sampling}=\left\{
\begin{array}{ccc}
x_{i} & \in & [0,1] \\
y_{i} & \in & [0,1]
\end{array}
\right. \end{equation}
Then, depending on the result, we increment $I$ in the following way :
\begin{equation} \textrm{Incrementation}\left\{
\begin{array}{ccc}
\text{if} & x_{i} > y_{i} & I=I+f(x_{i},y_{i}) \\
\text{if} & x_{i} < y_{i} & I=I+f(y_{i},x_{i}) \\
\end{array}
\right. \end{equation}
The advantage here is that, unlike case (1), we use all the sampled values.
Finally, one has to multiply the quantity $I$ by the measure of the sampling domain and divide by the number of draws $N$, so we get the numerical value of the integral.
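A possible implementation of case (2) is sketched below, with an arbitrary test function $f(x,y)=x$ (an assumption made only for the check) whose exact integral over the triangle is 1/3:

```python
import numpy as np

def f(x, y):
    # Arbitrary test function (assumption): integral over {0 < y < x < 1} is 1/3.
    return x

rng = np.random.default_rng(4)
N = 1_000_000
x, y = rng.random(N), rng.random(N)
xs, ys = np.maximum(x, y), np.minimum(x, y)   # case (2): every draw is folded into {x > y}
estimate = 0.5 * f(xs, ys).mean()             # multiply by the area of the triangle (1/2)
print(estimate)                               # ~ 0.3333
```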
A point source emits isotropically and covers an angle of $\theta _{0}$. A disk detector is positioned perpendicular to this source. So we have a cylindrical symmetry with 2 angles : $\varphi $ between [0,2$\pi $] and $\theta $ such that $\text {cos}(\theta _{0})<\text {cos}(\theta )<1$.
Calculate the pdf of $(\varphi ,\text {cos}\,\theta )$.
Having a generator of random values on [0,1], how will you randomly sample a couple $(\varphi ,\text {cos}\,\theta )$ within the acceptance of the disk ?
With this detector, we want to count hits during equal time intervals $\Delta t$. Express the distribution function for the number of recorded hits.
Since the point source emits isotropically, the random variables $\varphi $ and $\text {cos}\,\theta $ follow uniform laws, respectively on [0,2$\pi $] and [$\text {cos}\,\theta _{0},1$].
We have for the $\varphi $ pdf :
\begin{equation} \text{pdf}(\varphi)=\dfrac{1}{2\pi} \end{equation}
For the $\text {cos}\,\theta $ random variable, we can write :
\begin{equation} \text{pdf}(\text{cos}\,\theta)\,d\,\text{cos}(\theta)=K\,d\,\text{cos}(\theta) \,\,\,\,\text{with $K$=constant} \end{equation}
\begin{equation} {\large\int}_{\text{cos}\,\theta_{0}}^{1}\,\text{pdf}(\text{cos}\,\theta)\,d\,\text{cos}(\theta)=1=K\,{\large\int}_{\text{cos}\,\theta_{0}}^{1}\,d\,\text{cos}(\theta)\\
=K\,(1-\text{cos}\,\theta_{0}) \end{equation}
\begin{equation} \Rightarrow
K=\dfrac{1}{\,(1-\text{cos}\,\theta_{0})} \end{equation}
Given that $\varphi $ and $\text {cos}\,\theta $ are independent, we conclude :
\begin{equation} \text{pdf}(\varphi,\text{cos}\,\theta)=\text{pdf}(\varphi)\,\text{pdf}(\text{cos}\,\theta)=\dfrac{1}{\,(1-\text{cos}\,\theta_{0})}\,\dfrac{1}{2\pi} \end{equation}
With a random generator between [0,1], one makes the following correspondences :
\begin{equation} \varphi=2\,\pi\,u \,\,\,\text{with}\,\,u\,\,\text{uniform on}\,\,[0,1] \end{equation}
For $\text {cos}\,\theta $, we use the relation, taking random variable $v$ as uniform on [0,1] :
\begin{equation} {\large\int}_{\text{cos}\,\theta_{0}}^{\text{cos}\,\theta}\,\dfrac{d\,\text{cos}\,\theta}{1-\text{cos}\,\theta_{0}}=\,{\large\int}_{0}^{v}\,d\,v \end{equation}
\begin{equation} \Rightarrow\text{cos}\,\theta-\text{cos}\,\theta_{0}=v(1-\text{cos}\,\theta_{0})\\
\Rightarrow\text{cos}\,\theta=v+(1-v)\text{cos}\,\theta_{0} \end{equation}
The number of hits recorded during interval $\Delta t$ will follow a Poisson law.
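The sampling recipe above can be written as a short sketch; the opening angle and the mean hit rate used below are arbitrary illustrative values, not quantities from the exercise.

```python
import numpy as np

rng = np.random.default_rng(5)
cos_theta0 = np.cos(np.radians(20.0))    # arbitrary half-opening angle (assumption)
n = 100_000

u, v = rng.random(n), rng.random(n)
phi = 2.0 * np.pi * u                    # phi uniform on [0, 2*pi]
cos_theta = v + (1.0 - v) * cos_theta0   # cos(theta) uniform on [cos(theta0), 1]

# Counting: the number of hits per interval dt follows a Poisson law.
mean_rate = 12.3                         # arbitrary mean number of hits per dt (assumption)
counts = rng.poisson(mean_rate, size=1000)
print(phi.min(), phi.max(), cos_theta.min(), cos_theta.max(), counts.mean())
```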
We have a set of measures $y_{i}\,\,i=1,...,n$ depending on coordinates $x_{i}$ and whose theoretical model is linear, $y=ax+b$. Thanks to these data, we look for determining the values of $a$ and $b$.
Measures $y_{i}$ have an error $\sigma _{i}$. Firstly, coordinates $x_{i}$ are considered to be without error.
- Express the $\chi ^{2}$ you have to use.
- Express the 2 equations from which you can deduce estimations for $a$ and $b$.
With $n$ independent measures $y_{i}\,i=1,...,n$ and $n$ coordinates $x_{i}$ in a linear model $y=ax+b$, with $\sigma _{i}$ errors on $y_{i}$ and no errors on $x_{i}$, one can write the $\chi ^{2}$ as :
\begin{equation} \chi^{2}=\sum_{i=1}^{n}\,\dfrac{(y_{i}-(a\,x_{i}+b))^{2}}{\sigma_{i}^{2}} \end{equation}
One has to minimize the $\chi ^{2}$ to compute $a$ and $b$ values : one gets 2 linear equations with 2 unknowns ($a$ and $b$) :
\begin{equation} \dfrac{\partial\chi^{2}}{\partial a}=\sum_{i=1}^{n}\,\dfrac{(y_{i}-a\,x_{i}-b)\,x_{i}}{\sigma_{i}^{2}}=0 \end{equation}
\begin{equation} \dfrac{\partial\chi^{2}}{\partial b}=\sum_{i=1}^{n}\,\dfrac{(y_{i}-a\,x_{i}-b)}{\sigma_{i}^{2}}=0 \end{equation}
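These two normal equations form a 2x2 linear system in $a$ and $b$; a possible numerical sketch, with synthetic data and arbitrary parameter values, is:

```python
import numpy as np

# Minimal sketch: weighted least-squares fit y = a*x + b, errors only on y.
# a_true, b_true and the error model are arbitrary assumptions for the example.
rng = np.random.default_rng(6)
a_true, b_true = 1.7, -0.4
x = np.linspace(0.0, 10.0, 30)
sigma = 0.2 + 0.05 * x                     # per-point errors on y (assumption)
y = a_true * x + b_true + rng.normal(0.0, sigma)

w = 1.0 / sigma**2
# Normal equations obtained from d(chi^2)/da = 0 and d(chi^2)/db = 0
A = np.array([[np.sum(w * x**2), np.sum(w * x)],
              [np.sum(w * x),    np.sum(w)]])
rhs = np.array([np.sum(w * x * y), np.sum(w * y)])
a_hat, b_hat = np.linalg.solve(A, rhs)
print(a_hat, b_hat)                        # ~ (a_true, b_true)
```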
Now, $x_{i}$ coordinates have $\delta _{i}$ errors.
Express the $\chi ^{2}$ which has to be used in this case.
Write the 2 equations from which you calculate the estimations of $a$ and $b$. What is the difference with the previous case ?
The $\chi ^{2}$ formula must be modified because we take into account the $\delta _{i}$ errors on the coordinates $x_{i}$. Indeed, the variance of $(y_{i}-a\,x_{i}-b)$ is no longer equal to $V(y_{i})=\sigma _{i}^{2}$ :
\begin{equation} V(y_{i}-a\,x_{i}-b)=V(y_{i})+a^{2}\,V(x_{i})=\sigma_{i}^{2}+a^{2}\,\delta_{i}^{2} \end{equation}
The denominator of $\chi ^{2}$ depends on $a$ parameter :
\begin{equation} \chi^{2}=\sum_{i=1}^{n}\,\dfrac{(y_{i}-(a\,x_{i}+b))^{2}}{\sigma_{i}^{2}+a^{2}\,\delta_{i}^{2}} \end{equation}
Minimization of $\chi ^{2}$ is got by the 2 following equations :
\begin{equation} \dfrac{\partial\chi^{2}}{\partial
a}=0\,\,\,\text{and}\,\,\,\dfrac{\partial\chi^{2}}{\partial b}=0 \end{equation}
But we notice that these equations are no longer linear, since even the equation which minimizes the $\chi ^{2}$ with respect to $b$ depends on powers of $a$ : we have no analytical solution in this case.
The $\chi ^{2}$ method can give estimations of the parameters, $a\pm \sigma _{a}$ and $b\pm \sigma _{b}$, as well as a minimum value $\chi _{min}^{2}$.
We want to draw in the $(a,b)$ plane the contour related to a given confidence level.
Express the distribution function that you use.
In this case, by fixing the confidence level, express the variation of $\chi ^{2}$ compared to $\chi _{min}^{2}$, so we could write : $\chi ^{2}(CL)=\chi _{min}^{2}+\Delta\chi_{CL}^{2}$
What value do you get with $CL=0.68$ ?
At lowest order, the variation of the $\chi ^{2}$ around its minimum only involves the second derivatives of the $\chi ^{2}$ :
\begin{equation} \dfrac{\partial^{2}\chi^{2}}{\partial
a^{2}}\,\,\,\text{,}\,\,\dfrac{\partial^{2}\chi^{2}}{\partial
b^{2}}\,\,\text{et}\,\,\dfrac{\partial^{2}\chi^{2}}{\partial a\partial b} \end{equation}
Concerning $\Delta \chi ^{2}$, the distribution function is a $\chi ^{2}$ law with 2 degrees of freedom ; the pdf is written as :
\begin{equation} f(\Delta\chi^{2})=\dfrac{1}{2}\,e^{-\dfrac{\Delta\chi^{2}}{2}} \end{equation}
So for a fixed confidence level $CL$, we have :
\begin{equation} 1-CL={\large\int}_{\Delta\chi^{2}_{CL}}^{+\infty}\,\dfrac{1}{2}\,e^{-\dfrac{\Delta\chi^{2}}{2}}\,d\,\chi^{2}=e^{-\dfrac{\Delta\chi_{CL}^{2}}{2}} \end{equation}
also : $\Delta \chi ^{2}_{CL}=-2\ln(1-CL)$.
For $CL=0.68$, we have : $\Delta \chi ^{2}_{CL}=2.28$
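This relation is easy to check against the quantiles of a $\chi^2$ law with 2 degrees of freedom, for instance:

```python
import numpy as np
from scipy.stats import chi2

# Minimal check: Delta chi^2(CL) = -2 ln(1 - CL) equals the chi^2 quantile for 2 dof.
for cl in (0.68, 0.90, 0.95):
    print(cl, -2.0 * np.log(1.0 - cl), chi2.ppf(cl, df=2))   # ~2.28 for CL = 0.68
```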
PS : join, like me, the Cosmology@Home project, whose aim is to refine the model that best describes our Universe.
|
CommonCrawl
|
A Delicate Moment in Turkey's Economic Transition: Can Turkey Survive Mounting Economic Problems without the IMF's Bailout Package?
Taskinsoy, John
By the mid-19th century, after nearly 600 years of power, the Ottoman Empire's financial strength weakened considerably, exposing the Empire to the risk of losing its territories in the Balkans, Middle East, and Africa. The Ottoman Empire's involvement in the costly Crimean War (1853-56) was a fatal mistake which marked the beginning of the Empire's everlasting addiction to foreign borrowing. The creation of the Ottoman Public Debt Administration by Sultan Abdülhamid II in 1881 (similar to the IMF) turned the Ottoman Empire into Britain's semi-colony, and the Empire came to be labeled the "sick man of Europe". Mustafa Kemal (a brilliant man, military commander, politician, strategist, and genius) had initiated the Turkish national resistance movement in the aftermath of World War I to expel occupying armies. Eventually, Mustafa Kemal created an independent Turkish Republic in November 1922 from the ashes of an Empire that had existed for 623 years. But not without cost: the Paris Conference of 1925 forced Turkey, as the new Republic, to agree to pay the debt of the Ottoman Empire (the last remaining payment was made in 1954). Now the question is to be with the IMF or not to be. Following the Justice and Development Party's win in the 2007 general elections (47% of parliamentary seats), Prime Minister Recep Tayip Erdoğan said "No IMF in Turkey's future". Although Turkey made its final loan payment to the IMF in May 2008, many contend that Turkey's divorce from the IMF can hardly qualify as a true graduation since the country is on the brink of an economic crisis (financial meltdown), attributable to excessive private and household debt, massive dollarization, a failed coup attempt by a faction of the Turkish military (July 15, 2016), fast rising unemployment (over 14%), fast devaluation of the lira due to repeated speculative attacks and the subsequent cascade of corporate defaults. Against the backdrop of multifaceted instability, Turkey's current gloomy financial situation (high inflation and chronic deficits of current account, budget, and trade) is in desperate need of foreign capital inflows, especially when Turkey's options to finance deficits through external borrowing have become substantially limited. Long-standing addictions, whether to the U.S. dollar or to the IMF, are not easy to overcome by a politically oriented decision, given that Turkey has been one of the IMF's most devoted members since 1947. Turkey is going through very tough times; with low foreign reserves, the Turkish economy has become vulnerable to speculative attacks stemming from domestic and external sources.
A Pyramid Scheme Model Based on "Consumption Rebate" Frauds
Yong Shi,Bo Li,Wen Long
There are various types of pyramid schemes which have inflicted or are inflicting losses on many people in the world. We propose a pyramid scheme model which has the principal characteristics of many pyramid schemes that have appeared in recent years: promising high returns, rewarding the participants who recruit the next generation of participants, and the organizer taking all the money away when he finds that the money from the new participants is not enough to pay the previous participants' interest and rewards. We assume the pyramid scheme carries on in a tree network, an ER random network, a SW small-world network or a BA scale-free network respectively, and then give analytical results for how many generations the pyramid scheme can last in these cases. We also use our model to analyse a pyramid scheme in the real world, and we find that the connections between participants in the pyramid scheme may constitute a SW small-world network.
A Triptych Approach for Reverse Stress Testing of Complex Portfolios
Pascal Traccucci,Luc Dumontier,Guillaume Garchery,Benjamin Jacot
The quest for diversification has led to an increasing number of complex funds with a high number of strategies and non-linear payoffs. The new generation of Alternative Risk Premia (ARP) funds are an example that has been very popular in recent years. For complex funds like these, a Reverse Stress Test (RST) is regarded by the industry and regulators as a better forward-looking risk measure than a Value-at-Risk (VaR). We present an Extended RST (ERST) triptych approach with three variables: level of plausibility, level of loss and scenario. In our approach, any two of these variables can be derived by providing the third as the input. We advocate and demonstrate that ERST is a powerful tool for both simple linear and complex portfolios and for both risk management as well as day-to-day portfolio management decisions. An updated new version of the Levenberg - Marquardt optimization algorithm is introduced to derive ERST in certain complex cases.
A simple approach to dual representations of systemic risk measures
Maria Arduca,Pablo Koch-Medina,Cosimo Munari
We describe a general approach to obtain dual representations for systemic risk measures of the "allocate first, then aggregate"-type, which have recently received significant attention in the literature. Our method is based on the possibility to express this type of multivariate risk measures as special cases of risk measures with multiple eligible assets. This allows us to apply standard Fenchel-Moreau techniques to tackle duality also for systemic risk measures. The same approach can be also successfully employed to obtain an elementary proof of the dual representation of "first aggregate, then allocate"-type systemic risk measures. As a final application, we apply our results to derive a simple proof of the dual representation of univariate utility-based risk measures.
Blocktrades in OTC Options Markets: Price Impact and Liquidity Effects
Kiesel, Ruediger,Pietrobono, Alessio
While Over-The-Counter (OTC) markets have widely been studied for equity and interest-rate products, there is only scarce literature on OTC derivative equity markets. In this paper we use a unique Eurex data set to study differences for OTC and regular (meaning exchange traded) derivatives. We consider major German companies in the years before and during the financial crisis 2008. We find significant differences in prices and provided liquidity.
Cashless Society in Thailand
Kraiwanit, Tanpat,Panpon, Panya,Thimthong, Sunattha
Technological development has had both positive and negative disruptive effects on contemporary lifestyles. We therefore investigated the factors influencing access to the cashless society and consumers' reasons for deciding not to use electronic payment, including lack of confidence in the security and/or confidentiality of personal information. Data were collected from five regions of Thailand using a questionnaire. Quota sampling was used to collect data from 200 respondents in each area (N=1,000). There were 66 respondents who did not use electronic payment, but the remaining 934 cases were used to test the hypotheses through multivariate analysis of variance (MANOVA). The results showed that age, education, income and the use of the internet were significantly associated with access to the cashless society. This study found that the knowledge of electronic payments has led to the adoption of new forms of financial services that are safe and easy to use. We conclude that digital businesses, including the finance and banking sector, which may both disrupt and benefit from a cashless society, must participate in helping people such as the elderly, who are often slow to adopt electronic payment methods, to access the cashless society.
Decomposing Value Globally
Atilgan, Yigit,Demirtas, K. Ozgur,Gunaydin, A. Doruk,KIRLI, Imra
This paper utilizes an international context and revisits the findings which argue that the positive relation between book-to-market ratio and future equity returns is driven by historical changes in firm size in the US. After confirming these results in the US setting both in the original and a more recent sample period, we find that they do not hold in regions outside the US. In the international sample, book-to-market ratio has a significantly positive relation with future equity returns even after changes in firm size are controlled for in regression analyses. This positive relation is again visible when the orthogonal component of book-to-market ratio (which is independent from changes in firm size) is used as a sorting variable in portfolio analyses.
Deposit Insurance and Banks' Deposit Rates: Evidence from the 2009 EU Policy
Gatti, Matteo,Oliviero, Tommaso
In early 2009 the EU increased the minimum deposit insurance limit from €20,000 to €100,000 per bank account. Italy was the only country with a limit already set at €103,291, in place since 1994. To evaluate the impact of the new directive we run a diff-in-diff analysis and compare the bank-size weighted average deposit interest rates of the Eurozone countries with the Italian ones. We find that the increase of deposit insurance leads to a decrease of deposit rates in European countries relative to Italy of between 0.3 and 0.7 percentage points. The drop in deposit rates is confirmed by a diff-in-diff analysis run at bank level after implementing a propensity score matching of Italian banks with European ones. We finally show that this effect mainly comes from riskier banks, confirming that deposit insurance negatively affects deposit rates by reducing the depositors' required risk premium.
Executive Gender and Prospect-Value Bias: Evidence from Insider Trading
Sehrish, Saba,Ding, David K.,Visaltanachoti, Nuttawat
This study shows that prospect value influences insider-trading decisions, and the impact is stronger among female executives' trades. Insiders who buy (sell) when their company's prospect value is above (below) other firms' prospect values lose 34 (12) basis points over the next month. Female insider trades, as compared with trades by their male counterparts, are affected more by prospect-value bias, and they suffer significantly higher resultant losses. While the findings contradict the overconfidence hypothesis that predicts poor trading decisions by male insiders, the results are consistent with the male insiders' superior information access hypothesis, suggesting that behavioral biases diminish with knowledge.
Exponential utility with non-negative consumption
Roman Muraviev
We offer mathematical tractability and new insights for a framework of exponential utility with non-negative consumption, a constraint often omitted in the literature giving rise to economically unviable solutions. Specifically, using the Kuhn-Tucker theorem and the notion of aggregate state price density (Malamud and Trubowitz (2007)), we provide a solution to this problem in the setting of both complete and incomplete markets (with random endowments). Then, we exploit this result to provide an explicit characterization of complete market heterogeneous equilibria. Furthermore, we construct concrete examples of models admitting multiple (including infinitely many) equilibria. By using Cramer's large deviation theorem, we study the asymptotics of equilibrium zero coupon bonds. Lastly, we conduct a study of the precautionary savings motive in incomplete markets.
Heads We Both Win, Tails Only You Lose: the Effect of Limited Liability On Risk-Taking in Financial Decision Making
Ahrens, Steffen,Ciril, Bosch-Rosa,
One of the reasons for the recent crisis is that financial institutions took "too much risk" (Brunnermeier, 2009; Taylor et al., 2010). Why these institutions were taking so much risk is an open question. A recent strand in the literature points towards the "cognitive dissonance" of investors who, because of the limited liability of their investments, had a distorted view of riskiness (e.g., Barberis (2013); Benabou (2015)). In a series of laboratory experiments we show how limited liability does not affect the beliefs of investors, but does increase their willing exposure to risk. This result points to a simple explanation for the over-investment of banks and hedge funds: when incentives are not aligned, investors take advantage of moral hazard opportunities.
How Does a Firm Adapt in a Changing World? The Case of Prosper Marketplace
Li, Xinlong,Ching, Andrew T.
In a rapidly changing world, older data is not as informative as the most recent data. This is known as a concept drift problem in statistics and machine learning. How does a firm adapt in such an environment? To address this research question, we propose a generalized revealed preference approach. We argue that by observing a firm's choices, we can recover the way the firm uses the past data to make business decisions. We apply this approach to study how Prosper Marketplace, an online P2P lending platform, adapts in order to address the concept drift problem. More specifically, we develop a two-sided market model, where Prosper uses the past data and machine learning techniques to assess borrowers' and lenders' preferences, borrowers' risks, and then set interest rate for their loans to maximize his expected profits. By observing his interest rate choices over time and using this structural model, we infer that Prosper assigns different weights to past data points depending on how close the economic environments that generate the data are to the current environment. In the counterfactual, we demonstrate that Prosper may not be using the past data optimally, and it could improve its revenue by changing the way it uses data.
Infinitesimal perturbation analysis for risk measures based on the Smith max-stable random field
Erwan Koch,Christian Y. Robert
When using risk or dependence measures based on a given underlying model, it is essential to be able to quantify the sensitivity or robustness of these measures with respect to the model parameters. In this paper, we consider an underlying model which is popular in spatial extremes, the Smith max-stable random field. We study the sensitivity properties of risk or dependence measures based on the values of this field at a finite number of locations. Max-stable fields play a key role, e.g., in the modelling of natural disasters. As their multivariate density is generally not available for more than three locations, the likelihood ratio method cannot be used to estimate the derivatives of the risk measures with respect to the model parameters. Thus, we focus on a pathwise method, the infinitesimal perturbation analysis (IPA). We provide a convenient and tractable sufficient condition for performing IPA, which is intricate to obtain because of the very structure of max-stable fields involving pointwise maxima over an infinite number of random functions. IPA enables the consistent estimation of the considered measures' derivatives with respect to the parameters characterizing the spatial dependence. We carry out a simulation study which shows that the approach performs well in various configurations.
Lead-lag Relationships in Foreign Exchange Markets
Lasko Basnarkov,Viktor Stojkoski,Zoran Utkovski,Ljupco Kocarev
Lead-lag relationships among assets represent a useful tool for analyzing high frequency financial data. However, research on these relationships predominantly focuses on correlation analyses for the dynamics of stock prices, spots and futures on market indexes, whereas foreign exchange data have been less explored. To provide a valuable insight on the nature of the lead-lag relationships in foreign exchange markets here we perform a detailed study for the one-minute log returns on exchange rates through three different approaches: i) lagged correlations, ii) lagged partial correlations and iii) Granger causality. In all studies, we find that even though for most pairs of exchange rates lagged effects are absent, there are many pairs which pass statistical significance tests. Out of the statistically significant relationships, we construct directed networks and investigate the influence of individual exchange rates through the PageRank algorithm. The algorithm, in general, ranks stock market indexes quoted in their respective currencies, as most influential. Altogether, these findings suggest that all market information does not spread instantaneously, contrary to the claims of the efficient market hypothesis.
Limit order books, diffusion approximations and reflected SPDEs: from microscopic to macroscopic models
Ben Hambly,Jasdeep Kalsi,James Newbury
Motivated by a zero-intelligence approach, the aim of this paper is to connect the microscopic (discrete price and volume), mesoscopic (discrete price and continuous volume) and macroscopic (continuous price and volume) frameworks for the modelling of limit order books, with a view to providing a natural probabilistic description of their behaviour in a high to ultra high-frequency setting. Starting with a microscopic framework, we first examine the limiting behaviour of the order book process when order arrival and cancellation rates are sent to infinity and when volumes are considered to be of infinitesimal size. We then consider the transition between this mesoscopic model and a macroscopic model for the limit order book, obtained by letting the tick size tend to zero. The macroscopic limit can then be described using reflected SPDEs which typically arise in stochastic interface models. We then use financial data to discuss a possible calibration procedure for the model and illustrate numerically how it can reproduce observed behaviour of prices. This could then be used as a market simulator for short-term price prediction or for testing optimal execution strategies.
Multi-Agent Deep Reinforcement Learning for Liquidation Strategy Analysis
Wenhang Bao,Xiao-yang Liu
Liquidation is the process of selling a large number of shares of one stock sequentially within a given time frame, taking into consideration the costs arising from market impact and a trader's risk aversion. The main challenge in optimizing liquidation is to find an appropriate modeling system that can incorporate the complexities of the stock market and generate practical trading strategies. In this paper, we propose to use a multi-agent deep reinforcement learning model, which better captures high-level complexities compared to various machine learning methods, so that agents can learn how to make the best selling decisions. First, we theoretically analyze the Almgren and Chriss model and extend its fundamental mechanism so it can be used as the multi-agent trading environment. Our work builds the foundation for future multi-agent environment trading analysis. Secondly, we analyze the cooperative and competitive behaviours between agents by adjusting the reward functions for each agent, which overcomes the limitation of single-agent reinforcement learning algorithms. Finally, we simulate trading and develop an optimal trading strategy with practical constraints by using a reinforcement learning method, which shows the capabilities of reinforcement learning methods in solving realistic liquidation problems.
On the monotone stability approach to BSDEs with jumps: Extensions, concrete criteria and examples
Dirk Becherer,Martin Büttner,Klebert Kentia
We show a concise extension of the monotone stability approach to backward stochastic differential equations (BSDEs) that are jointly driven by a Brownian motion and a random measure for jumps, which could be of infinite activity with a non-deterministic and time inhomogeneous compensator. The BSDE generator function can be non convex and needs not to satisfy global Lipschitz conditions in the jump integrand. We contribute concrete criteria, that are easy to verify, for results on existence and uniqueness of bounded solutions to BSDEs with jumps, and on comparison and a priori $L^{\infty}$-bounds. Several examples and counter examples are discussed to shed light on the scope and applicability of different assumptions, and we provide an overview of major applications in finance and optimal control.
Optimal Liquidation under Partial Information with Price Impact
Katia Colaneri,Zehra Eksi,Rüdiger Frey,Michaela Szölgyenyi
We study the optimal liquidation problem in a market model where the bid price follows a geometric pure jump process whose local characteristics are driven by an unobservable finite-state Markov chain and by the liquidation rate. This model is consistent with stylized facts of high frequency data such as the discrete nature of tick data and the clustering in the order flow. We include both temporary and permanent effects into our analysis. We use stochastic filtering to reduce the optimal liquidation problem to an equivalent optimization problem under complete information. This leads to a stochastic control problem for piecewise deterministic Markov processes (PDMPs). We carry out a detailed mathematical analysis of this problem. In particular, we derive the optimality equation for the value function, we characterize the value function as continuous viscosity solution of the associated dynamic programming equation, and we prove a novel comparison result. The paper concludes with numerical results illustrating the impact of partial information and price impact on the value function and on the optimal liquidation rate.
Predictability concentrates in bad times. And so does disagreement
de Oliveira Souza, Thiago
Within a standard risk-based asset pricing framework with rational expectations, realized returns have two components: Predictable risk premiums and unpredictable shocks. In bad times, the price of risk increases. Hence, the predictable fraction of returns – and predictability – increases. "Disagreement" (dispersion in analyst forecasts) also intensifies in bad times if (i) analysts report (close to) risk-neutral expectations weighted by state prices, which become more volatile, or (ii) dividend volatility changes with the price of risk – for example, because consumption volatility changes. In both cases, individual analysts produce unbiased forecasts based on partial information.
Price Dynamics and Trader Overconfidence
Ahrens, Steffen,Ciril, Bosch-Rosa,,Roulund, Rasmus
Overconfidence is one of the most important biases in financial markets and is commonly associated with excessive trading and asset market bubbles. So far, most of the finance literature takes overconfidence as a given, "static" personality trait. In this paper we introduce a novel experimental design which allows us to track different measures of overconfidence during an asset market bubble. The results show that overconfidence co-moves with asset prices and point towards a feedback loop in which overconfidence adds fuel to the flame of existing bubbles.
Return on Equity (ROE), Return on Capital Employed (ROCE), Economic Profit (EP) and Economic Value Added (EVA) (Presentation Slides)
Casielles, Jorge
This paper includes the following topics:
1. The Return On Equity (ROE): What is ROE? How do we know if the ROE of a company is good or not? What should be the relationship between ROE and the cost of equity (Ke)? What are the ROEs of the different industries?
2. The Return On Capital Employed (ROCE): What is ROCE? The ROCEs of different industries.
3. The relationship between ROE and ROCE and the effect of debt. When does the use of debt improve ROE?
4. The ROCE depends on operating margin and turnover of capital employed. Therefore, how can the ROCE be improved?
5. The ROE depends on ROCE and debt level. Therefore, how can the ROE be improved?
6. The ROE depends on net profit margin, turnover of CE and financial leverage. When does financial leverage improve the ROE?
7. The ROE vs. Ke and ROCE vs. WACC. What relationship should there be between ROE and Ke? What is the WACC? What relationship should there be between ROCE and WACC? The ROE vs. Ke and ROCE vs. WACC of the different industries.
8. The Economic Profit (EP) and the Economic Value Added (EVA). What is the EP and what does it indicate? What is EVA and what does it indicate?
9. How to estimate the cost of equity (Ke): the Capital Asset Pricing Model (CAPM). What is the CAPM and how to estimate Ke using it? What are the problems of the CAPM?
10. Practice questions.
Multiple exercises with real companies like Apple, Amazon, Microsoft, Google, Bayer, Mazda and Orange are included.
Silicon Valley Venture Capitalist Confidence Index™ Q1 2019
Cannice, Mark
The Silicon Valley Venture Capitalist Confidence Index™ (Bloomberg ticker symbol: SVVCCI) is based on a recurring quarterly survey (since Q1 2004) of Silicon Valley/San Francisco Bay Area venture capitalists. The Index measures and reports the opinions of professional venture capitalists on their estimations of the high-growth venture entrepreneurial environment in the San Francisco Bay Area over the next 6 - 18 months. The Silicon Valley Venture Capitalist Confidence Index™ for the first quarter of 2019, based on a March 2019 survey of 26 San Francisco Bay Area venture capitalists, registered 3.63 on a 5-point scale (with 5 indicating high confidence and 1 indicating low confidence). This quarter's Index measurement climbed significantly from the previous quarter's Index reading of 3.20 which was the lowest level since Q1 2009.
The Coevolution of Banks and Corporate Securities Markets: The Financing of Belgium's Industrial Take-Off in the 1830s
Stefano Ugolini
Recent developments in the literature on financial architecture suggest that banks and markets not only coexist, but also coevolve in ways that are non-neutral from the viewpoint of optimality. This article aims to analyse the concrete mechanisms of this coevolution by focusing on a very relevant case study: Belgium (the first Continental country to industrialize) at the time of the very first emergence of a modern financial system (the 1830s). The article shows that intermediaries played a crucial role in developing secondary securities markets (as banks acted as securitizers), but market conditions also had a strong feedback on banks' balance sheets and activities (as banks also acted as market-makers for the securities they had issued). The findings suggest that not only structural, but also cyclical factors can be important determinants of changes in financial architecture.
The Econometrics of Redenomination Risk
Cherubini, Umberto
We use the quotes of European sovereign CDS written under the 2014 credit event definitions, which include redenomination, and those written under the 2003 definitions, which exclude redenomination for the G-7 countries, to estimate simultaneously the implied redenomination risk and the dependence between redenomination and default risk. With positive dependence, the so-called ISDA basis, that is, the difference between the two CDS spreads, systematically underestimates redenomination risk. The estimation, carried out with the MLE-on-transformed-data technique for Italy, France and Germany, shows evidence of statistically significant, albeit moderate, dependence between default and redenomination risk, of around 10%. Even this low level of correlation is sufficient to produce a material bias in the simple CDS spread difference used in the market; for Italy, the bias reaches 20 bp at the end of the sample. In order to extend the measure backward before 2014 and cover the whole crisis period, we propose a new measure, the relative asset swap basis (R-ASW basis), which shows a remarkably high degree of correlation, at least for Italy, with the ISDA basis over the period in which both are available. This measure, applied across the crisis period, shows that redenomination risk levels at the end of the sample, 2018-2019, are comparable with those reached at the peak of the Italian crisis in 2011-2012. The difference is that in 2018-2019 the level of redenomination risk, for the first time and for both Italy and France, is about the same as default risk, which was instead much higher in 2011-2012. A cross-border analysis shows that the redenomination risk of France plays a key role in the survival of the Euro: the redenomination risk of France is associated with both that of Italy and that of Germany, which are instead independent of each other. A measure of "end of the Euro" probability, based on a Marshall-Olkin model of the simultaneous redenomination of the three countries, shows that redenomination of France is largely systemic and associated with the end of the Euro, while redenomination of Italy is largely idiosyncratic and so mostly associated with country-specific shocks.
The Syntax of the Accounting Language: A First Step
Frederico Botafogo
We review and interpret two basic propositions published by Ellerman (2014). The propositions address the algebraic structure of T-accounts and double entry bookkeeping (DEB). The paper builds on this previous contribution with the aim of reconciling the two apparently dichotomous perspectives of accounting measurement: the one that focuses primarily on the stock of wealth and the one that focuses primarily on the flow of income. The paper claims that T-accounts and DEB have an underlying algebraic structure suitable for approaching measurement from either or both perspectives, and that accountants' preferences for stocks or flows can be framed in ways which are mutually consistent. The paper is a first step in addressing this consistency issue. It avoids the difficult mathematics of abstract algebra by applying the concept of syntax to accounting numbers, such that the accounting procedure qualifies as a formal language with which accountants convey meaning.
Under Pressure: Listing Status and Disinvestment in Japan
French, Joseph J.; Fujitani, Ryosuke; Yasuda, Yukihiro
We provide the first large sample comparisons of disinvestment by listed and unlisted firms. This study focuses on Japanese firms from 2001-2017, as this was a period of economic stagnation and financial reforms encouraging companies to restructure. We show that stock market listing is positively related to disinvestment. Listed firms disinvest 1.9% more than similar unlisted firms. Disinvestment activities of listed companies are also more sensitive to investment opportunities. Additionally, firms that disinvest show improvements in ROA and increases in future investment. Finally, we find that foreign (financial institution) ownership is positively (negatively) related to disinvestment.
Impedance Spectroscopy as a Novel Approach to Probe the Phase Transition and Microstructures Existing in CS:PEO Based Blend Electrolytes
Shujahadeen B. Aziz (1,2),
M. G. Faraj (ORCID: orcid.org/0000-0001-9921-0208) (3) &
Omed Gh. Abdullah (ORCID: orcid.org/0000-0002-1914-152X) (1,2)
Scientific Reports volume 8, Article number: 14308 (2018)
In this work, the role of the phase transition of PEO from the crystalline to the amorphous phase in enhancing the DC conductivity of a chitosan-based polymer electrolyte is discussed. A silver ion-conducting polymer electrolyte based on chitosan (CS) incorporated with silver nitrate (AgNt) was prepared via the solution cast technique. Various amounts of polyethylene oxide (PEO) were added to the CS:AgNt system to prepare blend polymer electrolytes. Ultraviolet-visible (UV-vis) spectrophotometry confirms that the blended samples containing AgNt salt exhibit a broad absorption peak. Optical micrographs show small white specks on the surface of the samples, and the SEM results clearly reveal aggregated silver nanoparticles. The enlargement of the crystalline regions is evident from both the morphology and the impedance plots, and phase separation is observed in the SEM images at high PEO concentration. The XRD results support the morphological observations. In this study a new approach is offered to explore the microstructures existing in the blend electrolytes: the diameter of the semicircle linked to the crystalline phase in the impedance spectra was found to increase with increasing PEO concentration. The DC conductivity increases slowly at low temperatures, while above 333 K an abrupt increase is observed. This rapid rise of DC conductivity at high temperatures is consistent with the DSC results and with the impedance spectra recorded at high temperatures.
Solid polymer electrolytes (SPEs) have emerged as a new class of electrolyte materials to replace conventional organic sol–gel electrolytes. This is due to their long life, safety, processability, flexibility, and both electrochemical and dimensional stability1. They are promising materials because they eliminate the problems of harmful gas production and corrosive solvent leakage, and they find wide application in electrochemical devices, fuel cells and electrochromic windows2. Polymer blending is a convenient method for creating novel polymeric materials whose property profiles are superior to those of the individual components, and it is usually far less costly and time-consuming than synthesizing polymers with new properties3. Blending is also one of the most promising ways to enhance the ionic conductivity of polymeric electrolyte membranes, although a high degree of blending can degrade the mechanical properties of SPEs4. It is well reported that polymer electrolytes with good mechanical stability can be obtained with the blending technique5. Chitosan (CS) is an organic, biodegradable, chelating polymer that forms non-porous membranes. A pure chitosan film shows low ionic conductivity, although it is a desirable film-forming material6. In our earlier work on chitosan-based solid electrolytes, the reduction of silver ions to silver nanoparticles was observed7,8,9,10,11. The lone-pair electrons on the chitosan functional groups are responsible for the reduction of silver ions to silver particles and for complexation12. In our previous work we also observed that chitosan incorporated with silver salts is almost completely amorphous7,11. On the other hand, polyethylene oxide (PEO) has been widely used as a host macromolecule to prepare solid, gel and nanocomposite polymer electrolytes13,14,15,16. PEO-based SPEs exhibit a rich phase behavior that depends on temperature, salt concentration, and thermal history, and high concentrations of a wide variety of salts can be dissolved in PEO-based electrolytes15,16. PEO is an inexpensive linear polymer with a helical structure; it is amorphous above its melting point (Tm) and crystalline below 60 °C17. The semi-crystalline structure of PEO presents inherent problems when it is used as a polymer matrix for ion transport18. One of the most promising ways to enhance the amorphous phase and increase the ionic conductivity of PEO-based electrolyte systems is to blend PEO with a suitable, more amorphous polymer19,20. Polymers such as polyvinyl pyrrolidone (PVP)19, boroxine ring polymer (BP)20, polyacrylonitrile (PAN)21,22, and poly(dithiooxamide) (PDTOA)23 have been blended with PEO-based electrolytes. The crystallinity of PEO-based electrolytes has been studied extensively through XRD and DSC measurements21,22,24,25, and several authors have reported the Tm of PEO-based solid polymer electrolytes: Karmakar et al. reported a Tm of about 55.9 °C for the PEO:LiI system24, and Chun-Guey et al.21 reported a Tm of 56 °C for PEO:LiClO4 electrolytes.
The melting temperature (Tm) is the temperature at which the phase transformation takes place in PEO-based electrolytes, so a steep increase of ionic conductivity may be observed in plots of DC conductivity versus 1000/T. In earlier studies, however, a rapid increase of DC conductivity above 60 °C in PEO-based solid polymer electrolytes was not observed23,26,27,28,29, and the identification of crystalline PEO phases from impedance plots has not been described previously21,22,23,24,25,26,27,28,29.
Ion-conducting electrolytes are the heart of electrochemical devices, and the charge carriers responsible for transport are the cations of the dissolved salts. It is essential that materials be characterized first and then chosen for a desired application on the basis of their DC conductivity. It is well accepted that PEO changes from the semi-crystalline to the amorphous phase at temperatures above 60 °C4. Consequently, below the Tm of PEO an extra semicircle due to the crystalline phase of PEO may appear in the impedance plots. The objective of this work is to show how this phase transition can be identified from the pattern of DC conductivity versus 1000/T. Furthermore, the obvious growth of the crystalline phase as increasing amounts of PEO are added to the CS-based electrolyte is also tracked through the impedance plots. The morphological and structural observations help to interpret the electrical properties. In the present work the role of the PEO phase transition in the sudden increase of DC conductivity above 60 °C is presented for chitosan:PEO blend electrolytes. To support the electrochemical impedance results and the sudden increase in DC conductivity, several complementary techniques (UV-vis, OM, SEM, XRD and DSC) were also used.
Materials and sample preparation
In the present work, 80 wt.% of chitosan (≥75% deacetylated, average molecular weight 1.1 × 10^5 g/mol) and 20 wt.% of silver nitrate (AgNO3, molecular weight 169.87 g/mol) were used to prepare a silver ion-conducting polymer electrolyte, with 1% acetic acid as the solvent. The chitosan (CS) powder was first dissolved in 100 ml of the acetic acid solvent, and the solution was stirred for more than 24 hours until a clear, viscous chitosan solution was obtained. The silver nitrate (20 wt.%) was then added to the prepared CS solution with continuous stirring to form the CS:AgNt electrolyte. Separately, different ratios of PEO were dissolved in acetonitrile with continuous stirring, and the PEO solutions were added to the prepared CS:AgNt solution to obtain the polymer blend electrolytes. The blend electrolyte samples were coded CSPEN1, CSPEN2, CSPEN3, CSPEN4 and CSPEN5 for CS:AgNt incorporated with 10 wt.%, 20 wt.%, 30 wt.%, 40 wt.% and 50 wt.% of PEO, respectively. The mixtures were stirred continuously until homogeneous, cast into separate Petri dishes, and left to dry into films at room temperature. The films were then transferred into a desiccator containing blue silica gel desiccant for further drying, which produces solvent-free films.
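As a quick numerical illustration of these compositions, the short script below computes the component masses for each blend code. It is a minimal sketch, not part of the original procedure: it assumes the PEO wt.% is taken relative to the total solids, that the remainder keeps the 80:20 CS:AgNO3 ratio of the base electrolyte, and that a 1 g batch is prepared; the actual weighing basis used by the authors may differ.

```python
# Illustrative batch calculator for the CS:AgNt:PEO blend compositions.
# Assumption (not stated explicitly above): PEO wt.% is relative to total solids,
# and the remaining mass keeps the 80:20 CS:AgNO3 ratio of the base electrolyte.
TOTAL_MASS_G = 1.0          # hypothetical batch size
CS_FRACTION_IN_BASE = 0.80
AGNT_FRACTION_IN_BASE = 0.20

blends = {"CSPEN1": 0.10, "CSPEN2": 0.20, "CSPEN3": 0.30,
          "CSPEN4": 0.40, "CSPEN5": 0.50}

for code, peo_frac in blends.items():
    peo_mass = TOTAL_MASS_G * peo_frac
    base_mass = TOTAL_MASS_G - peo_mass
    print(f"{code}: CS {base_mass * CS_FRACTION_IN_BASE:.3f} g, "
          f"AgNO3 {base_mass * AGNT_FRACTION_IN_BASE:.3f} g, PEO {peo_mass:.3f} g")
```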
Characterization techniques
The ultraviolet-visible (UV-vis) absorption spectra of the prepared films were recorded on a UV-vis spectrometer (V-570, Jasco, Japan) over the range 180 to 1000 nm. To investigate the surface microstructure of the films, optical micrograph (OM) images were taken at adjusted magnification with an optical microscope (MEIJI) through an attached camera controlled by DINO-LITE software. The surface morphology of the samples was further examined by scanning electron microscopy (SEM) using an FEI Quanta 200 FESEM. X-ray diffraction (XRD) patterns were recorded with an Empyrean X-ray diffractometer (PANalytical, Netherlands) operating at 40 mA and 40 kV. The samples were scanned with monochromatic Cu Kα radiation (λ = 1.5406 Å) over the glancing-angle range 5° ≤ 2θ ≤ 80° with a step size of 0.1°.
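As a worked illustration of how the diffraction angles translate into lattice spacings, the snippet below applies Bragg's law (nλ = 2d sinθ) with the Cu Kα wavelength quoted above. The two 2θ values are only example inputs, chosen near the crystalline PEO reflections discussed later; they are not fitted peak positions from this work.

```python
import math

WAVELENGTH_A = 1.5406  # Cu K-alpha wavelength in angstroms, as quoted above

def d_spacing(two_theta_deg, n=1):
    """Interplanar spacing from Bragg's law: n * lambda = 2 * d * sin(theta)."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return n * WAVELENGTH_A / (2.0 * math.sin(theta_rad))

# Approximate 2-theta positions of the crystalline PEO reflections (within the
# 17-23 degree window mentioned in the Results); illustrative values only.
for two_theta in (19.0, 23.0):
    print(f"2theta = {two_theta:4.1f} deg  ->  d = {d_spacing(two_theta):.3f} angstrom")
```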
The impedance of the films was measured using a HIOKI 3531 Z Hi-tester over the frequency range 50 Hz to 1000 kHz. The SPE blend films were cut into small discs (2 cm in diameter) and sandwiched between two stainless steel electrodes under spring pressure. The measurements were carried out at temperatures ranging from 303 K to 363 K.
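The DC conductivities discussed later are obtained from the bulk resistance read off such impedance spectra together with the sample geometry, via the standard relation σ_dc = t / (R_b · A). The sketch below shows that conversion; the film thickness and bulk resistance used here are placeholder values for illustration only, since they are not reported at this point in the text.

```python
import math

def dc_conductivity(thickness_cm, bulk_resistance_ohm, diameter_cm=2.0):
    """sigma_dc = t / (R_b * A) for a disc-shaped film between blocking electrodes."""
    area_cm2 = math.pi * (diameter_cm / 2.0) ** 2
    return thickness_cm / (bulk_resistance_ohm * area_cm2)

# Placeholder inputs: a 0.01 cm thick film with a 5 kOhm bulk resistance
print(f"sigma_dc = {dc_conductivity(0.01, 5.0e3):.2e} S/cm")
```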
UV-vis analysis
Figure 1 illustrates the absorption spectra of the pure CS and CS:AgNt films. A broad absorption peak is observed at about 364 nm in the UV-vis spectrum of pure CS; this peak can also be seen clearly in the spectrum of the CSPEN5 sample (see Fig. 2). The peak near 360 nm is characteristic of π–π* transitions related to the carbonyl groups (C=O) in the CS polymer7. No absorption peaks appear in the 380–580 nm range for pure CS, whereas a broad absorption peak at 416 nm is obtained for the CS:AgNt sample. This peak appears again, with enhanced intensity, in the UV-vis spectra of the blended samples, as shown in Fig. 2; clearly, the band at 416 nm for the CS:AgNt system (see Fig. 1) is shifted in the blend samples and its intensity is increased. As discussed in the Introduction, silver ions may be reduced to silver nanoparticles in CS-based electrolytes. This is a significant obstacle to the development of silver ion-conducting polymer electrolytes; indeed, the reduction of silver ions to metallic nanoparticles is regarded as one of the main drawbacks of silver ion-conducting polymer electrolytes for electrochemical device applications. Other researchers have also reported the reduction of silver ions in polymer electrolytes. Chandra et al.30 measured the electronic conductivity contributed by metallic silver nanoparticles using Wagner's polarization technique and concluded that the electronic contribution to the conductivity increases as Ag+ ions are reduced to metallic silver particles in PEO-based electrolytes. UV-vis spectroscopy is a simple technique for detecting silver nanoparticles in polymer-based electrolytes. It is well reported that particles with nanometer dimensions exhibit an intense absorption band, the so-called surface plasmon resonance (SPR) band, which originates from the collective motion of the conduction electrons within the particles under the electric field of the incident light31. The local electromagnetic fields at the nanoparticle surface can be manipulated and enhanced through the plasmon resonance, and different metals exhibit different resonant photon wavelengths32. The increase of band intensity (see Fig. 2) can be ascribed to the increase of silver nanoparticles7,9,10,11,12. The peak position, maximum absorbance and band shape of the SPR band depend on the particle structure, size, geometry, polydispersity and the dielectric constant of the medium31. As is well known, CS has two important functional groups, the OH and NH groups6. Thus, the UV-vis results reveal the formation of silver nanoparticles through the appearance of SPR peaks. Chitosan is a nontoxic natural biopolymer, and an earlier study reported that the use of environmentally friendly materials for generating metal nanoparticles is of great importance33. Although the objective of this study is not to synthesize metallic particles, the results show that chitosan is capable of supporting the formation of metallic silver particles. As stated in the Introduction, ion-conducting electrolytes are the central part of electrochemical devices, and it is fundamental that the materials be analyzed first and then chosen for the preferred application. In this study, morphological analyses were carried out to obtain further information on the formation of the silver nanoparticles.
UV-vis spectra for pure CS and CS:AgNt samples.
UV-vis spectra for the blended samples.
Morphological and Structural Study
The surface morphology of the CS:AgNt system, observed by optical microscopy (OM), is shown in Fig. 3. Tiny white spots can be seen on the surface of the CS:AgNt film and are attributed to metallic silver nanoparticles; the OM image is therefore in agreement with the UV-vis result for the CS:AgNt system (see Fig. 1). The surface morphology of the blended electrolyte samples is shown in Fig. 4(a–e). It is evident from the images that white spots appear and that their size increases as the PEO concentration is increased up to 30 wt.%. Similar evidence has been obtained with the optical micrograph technique by other researchers34, and these results are consistent with the distinguishable SPR peaks of Fig. 2. The morphology of the samples is significantly distorted at PEO concentrations of 40 and 50 wt.%. Prior studies have confirmed that crystalline regions consisting of platelets or lamellae radiating from a nucleating centre (so-called spherulites) emerge in PEO-based electrolytes35,36,37, while the dark regions between adjacent lamellae remain amorphous37. Previous studies have also confirmed that the spherulites correspond to the crystalline regions and that the regions between the spherulite interfaces correspond to the amorphous phase24,35,36,37. Crystallization and melting phenomena are known to have a significant impact on the electrical properties of semicrystalline polymer electrolytes: increasing stiffness and the alignment of polymer chains into lamellae strongly reduce the ionic conductivity by blocking the ions and restricting polymer chain motion36. It is well recognized that the degree of crystallinity of a polymer electrolyte plays an important role in its ionic conductivity35. Reddy and Chu studied the morphology of a PEO:potassium iodide (KI) electrolyte using the optical micrograph method38 and observed that the size of the spherulites decreases, and the dark (amorphous) regions grow, with increasing KI concentration. In the present study the amount of dissolved salt is fixed by the host chitosan polymer, so an increase of spherulites is expected as the PEO concentration is increased while the CS:AgNt system is kept constant. From Fig. 4d it is obvious that crystalline enlargement occurs at 40 wt.% PEO. The appearance of an interconnected spherulitic structure with some amorphous regions (see Fig. 4e) is evidence that incorporating a large amount of semi-crystalline PEO into the CS:AgNt system changes the crystalline structure of the system, which may reduce the ionic conductivity. Marzantowicz et al. studied the morphology and impedance spectroscopy of PEO-based electrolytes35 and observed that the diameter of the second semicircle increases directly with increasing crystallization of the electrolyte. Consequently, examination of the impedance spectra may provide more insight into the crystallization phenomena in the present work.
Optical micrograph image for CS:AgNt sample.
Optical micrograph images for (a) CSPEN1, (b) CSPEN2, (c) CSPEN3, (d) CSPEN4 (the dark regions indicated by green arrows are amorphous phases) and (e) CSPEN5 (the spherulites indicated by green arrows are distinguishable crystalline phases).
SEM images were taken of selected samples to support the UV-vis and OM results. SEM is an efficient way to study the structure and morphology of the sample surface, and one of its advantages is that the magnification can be varied over a broad range, allowing a particular area of the sample to be examined easily39. The surface morphology and structure of polymer electrolyte films are key properties for identifying their behavior. The electron images were taken at 1000× magnification; prior to examination, the films were attached to an aluminum holder using conductive tape and coated with a thin layer of gold. The SEM image and XRD result for the CS:AgNt (base material) system are shown in Fig. 5. Small white specks due to Ag° nanoparticles appear on the film surface, and the absence of phase separation together with the broad XRD peaks reveals the amorphous structure of the CS:AgNt sample. The SEM images for the selected polymer blend electrolyte films are shown in Fig. 6(a–c); these characterizations were carried out to observe the morphological changes in the blended samples. The film with 10 wt.% of PEO reveals a smooth morphology with some white specks, indicating the formation of silver nanoparticles. An EDX spectrum was taken on the white spots of the CSPEN3 sample (Fig. 6b), and clear peaks for metallic silver particles are seen (see Fig. 6d). Therefore, the SPR peaks in Figs 1 and 2 are strongly supported by the SEM and EDX results. In our previous work, white spots and chains of silver nanoparticles were likewise observed in SEM analyses of chitosan samples incorporated with silver salts7,8,10,11,40. The irregular, wave-like surface structure in Fig. 6a results from polymer–salt complexation, and a smooth morphology is closely associated with the amorphous nature of a polymer blend electrolyte complex41. The phase separation is minor at 10 wt.% of PEO (see Fig. 6a), becomes obvious at 30 wt.%, and is extensive at 50 wt.% of PEO, as depicted in Fig. 6b and Fig. 6c. It is well reported that the absence of phase separation and the appearance of a smooth surface morphology in blend electrolytes are evidence of the amorphous nature of the system41. The XRD results are provided alongside the morphologies in Fig. 6a–c. The two obvious peaks appearing in the 2θ range from 17° to 23° are ascribed to the crystalline peaks of PEO, as reported in the literature16,19,22,24,25. Noticeably, with increasing PEO concentration the intensity of the crystalline PEO peaks increases and the amorphous phase is suppressed; thus the XRD results strongly support the OM images and SEM micrographs. In our previous work, SEM was used to detect crystalline structures attributed to ion-pair formation in the chitosan:NaTf solid electrolyte system at high NaTf salt concentration, and the SEM results were used to explain the drop in DC conductivity at high salt concentration39. Kadir et al. have also used SEM to identify protruded crystalline salt structures in chitosan-based solid electrolytes at high salt concentrations42,43.
The results of the present work illustrate that SEM can also be used to detect phase separation in blend polymer electrolytes. The appearance of extensive phase separation is evidence of a large amount of crystalline phase, as found in the XRD results, and the OM observations strongly support the SEM images.
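A common way to quantify the trend seen in such XRD patterns is the simple area-ratio estimate of the degree of crystallinity, X_c = A_crystalline / A_total, where the crystalline area is the intensity above the amorphous halo. The routine below is a generic sketch of that deconvolution-free estimate; the diffractogram used here is a synthetic placeholder, since the raw patterns of Figs 5 and 6 are not reproduced numerically, and it is not the authors' analysis procedure.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integration written out explicitly (version-agnostic)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def degree_of_crystallinity(two_theta, intensity, amorphous_baseline):
    """X_c (%) = area of crystalline peaks above the amorphous halo / total area."""
    total = trapz(intensity, two_theta)
    crystalline = trapz(np.clip(intensity - amorphous_baseline, 0.0, None), two_theta)
    return 100.0 * crystalline / total

# Synthetic placeholder diffractogram: broad amorphous halo plus two sharp PEO peaks
two_theta = np.linspace(5.0, 80.0, 1500)
halo = 50.0 * np.exp(-((two_theta - 21.0) / 12.0) ** 2)
peaks = (200.0 * np.exp(-((two_theta - 19.2) / 0.3) ** 2)
         + 150.0 * np.exp(-((two_theta - 23.4) / 0.3) ** 2))
print(f"X_c ~ {degree_of_crystallinity(two_theta, halo + peaks, halo):.1f} %")
```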
SEM image and XRD result for CS:AgNt system.
SEM images and XRD results for (a) CSPEN1, (b) CSPEN3 and (c) CSPEN5 samples, and (d) EDX analysis of the white specks that appear on the CSPEN3 sample.
Impedance and Bode Plot Study
Figure 7(a–e) shows the impedance spectra of all the polymer blend electrolyte samples with different PEO concentrations at room temperature (303 K). A typical AC impedance plot exhibits two distinct regions: a high-frequency semicircle and a low-frequency spike. The semicircle and the spike are ascribed, respectively, to ionic conduction through the bulk of the polymer electrolyte and to the effect of the blocking electrodes (electrode polarization)44. Since blocking electrodes were employed in the impedance analysis, the electrolyte/electrode interface can be treated as a single capacitor, which for an ideal capacitor would appear as a vertical spike in the impedance plot45,46. A second semicircle is observed at 20 wt.% PEO. It is interesting to note that the diameter of this second semicircle increases with increasing PEO concentration up to 40 wt.%, while the first semicircle shifts towards the origin. Eventually, when the PEO concentration reaches 50 wt.%, the first semicircle disappears and the resistivity of the sample increases considerably. In other words, at 30, 40 and 50 wt.% of PEO a larger crystalline fraction due to PEO is introduced into the CS:AgNt (base material) system, so the second semicircle suppresses the first one, which becomes difficult to distinguish. From this we conclude that the diameter of the second semicircle in the impedance plots increases with increasing PEO concentration, i.e., the second semicircle is related to the crystalline fraction. At 50 wt.% of PEO only one semicircle with a large diameter can be distinguished, indicating the increase of resistivity. It is well documented that in polymer electrolytes the crystalline regions block ionic motion and thus decrease the conductivity. These results are in accordance with the morphologies seen in Figs 4 and 6. A previous study also reported that spherulite growth initially has only a minor effect on the conductivity, but the conductivity drops strongly once the crystallites begin to constitute a large fraction of the polymer35. The most probable explanation for the decrease in conductivity is the densification of the structure (i.e., the growth of crystalline material between existing lamellae), which hinders charge-carrier transport in the remaining amorphous phase. In such a compact structure, a small reduction of the amorphous phase content can interrupt the continuity of the easy conduction paths. Increased sample stiffness and a loss of contact with the electrodes can be further reasons for such a large drop in conductivity36. Many studies have established that ion transport occurs mainly in the amorphous portion of polymer electrolytes and that ion mobility decreases with increasing crystalline content45,47,48. The results of the present work thus show that the properties and the structure of these materials are strongly correlated. Comparing Figs 6 and 7, it is easy to see that the growth of the crystalline region in polymer electrolytes is a major obstacle to developing solid electrolyte systems with high DC conductivity at room temperature.
From the OM images it appears that the crystalline region grows with increasing PEO concentration; consequently, the second semicircle grows continuously until the first semicircle disappears and a drop in DC conductivity occurs. The clear phase separation (SEM images) and intense crystalline peaks (XRD patterns) observed in Fig. 6 strongly support the development of a wide crystalline region at high PEO concentration. A drop in conductivity is therefore expected, because ion transport occurs almost exclusively in the amorphous phase of polymer electrolytes. The roles of the crystalline and amorphous phases in the conductivity can be understood further from the impedance spectra recorded at selected temperatures, as shown in the later sections.
Impedance plots for (a) CSPEN1, (b) CSPEN2, (c) CSPEN3, (d) CSPEN4 and (e) CSPEN5 films at ambient temperature.
To support our interpretation and the semicircles proposed for the Nyquist plots, we also studied the Bode plots. From an electrochemical viewpoint, Bode plots are crucial for understanding charge transfer in electrolyte materials. Figure 8 shows the Bode plot (phase angle versus frequency) for all the blend electrolyte samples. At 10 wt.% of PEO one peak can be distinguished, while at higher PEO concentrations two peaks can be separated, as observed in the impedance plots. According to Eftekhari49, three regions should be distinguished in Bode plots: the capacitive, diffusion and charge-transfer regions. The capacitive (plateau) region appears at very low frequency, usually from 10^-2 to 10^0 Hz; because of the frequency limitation of our measurement, this region is not visible in the Bode plots of the present work. As discussed for the impedance plots (see Fig. 7), the first peak is related to ion transfer in the amorphous phase of the electrolytes, while the second peak is ascribed to the crystalline phase of PEO; its strength increases until it suppresses the first peak at 50 wt.% of PEO. The increase of PEO up to 50 wt.% is therefore responsible for the increase of resistivity. To establish that each peak corresponds to an equivalent electric circuit, the Bode magnitude was plotted for selected samples, as depicted in Fig. 9(a–c); the fitted data correspond to the electrical equivalent circuit of each sample, shown in the insets of Fig. 9. It is clear from Fig. 7a that the impedance of this system consists of one semicircle plus a low-frequency spike, so a single equivalent circuit describes it. For the other impedance plots of Fig. 7 two semicircles can be drawn, and each semicircle corresponds to an equivalent electric circuit. Consequently, the Bode magnitude strongly supports the Bode phase plot of Fig. 8, and both confirm our interpretation of the semicircles in the impedance plots (see Fig. 7): each semicircle represents the parallel combination of a resistor and a capacitor, while the spikes at low frequency are represented by constant phase elements (CPE) in series with the other circuit elements. Other researchers have also used equivalent circuits to interpret Bode and impedance plots50,51.
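To make this equivalent-circuit picture concrete, the sketch below simulates the impedance of two parallel R–CPE elements in series (one standing for the amorphous bulk, one for the crystalline PEO fraction) plus a series CPE for the blocking-electrode spike. The parameter values are invented solely to reproduce the qualitative two-semicircle shape; they are not fitted values from Figs 7–9.

```python
import numpy as np

def z_cpe(omega, q, n):
    """Impedance of a constant phase element: Z = 1 / (Q * (j*omega)^n)."""
    return 1.0 / (q * (1j * omega) ** n)

def z_parallel_r_cpe(omega, r, q, n):
    """Parallel combination of a resistor and a CPE."""
    return 1.0 / (1.0 / r + 1.0 / z_cpe(omega, q, n))

freq = np.logspace(1.7, 6, 300)          # ~50 Hz to 1 MHz, as in the measurement
omega = 2.0 * np.pi * freq

# Invented parameters: amorphous-bulk arc, crystalline arc, electrode CPE
z_total = (z_parallel_r_cpe(omega, r=2.0e3, q=5e-9, n=0.90)    # amorphous phase
           + z_parallel_r_cpe(omega, r=8.0e3, q=1e-7, n=0.85)  # crystalline phase
           + z_cpe(omega, q=5e-6, n=0.70))                     # blocking electrodes

# Nyquist coordinates (Z' vs -Z'') and Bode phase angle
z_real, z_neg_imag = z_total.real, -z_total.imag
phase_deg = np.degrees(np.arctan2(z_total.imag, z_total.real))
print(f"Low-frequency |Z| ~ {abs(z_total[0]):.0f} Ohm, phase {phase_deg[0]:.1f} deg")
```

Plotting z_neg_imag against z_real reproduces the two depressed semicircles followed by a spike, and the phase angle versus frequency reproduces the two-peak Bode behaviour described above.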
Bode plot (phase angle vs frequency) for all the blend electrolyte samples.
Bode magnitudes (experimental and fitted) for (a) CSPEN1, (b) CSPEN2 and (c) CSPEN5 blend electrolyte samples.
DC Conductivity Analysis
Figure 10 shows the DC conductivity versus 1000/T for the blend electrolyte samples. Two regions can be distinguished over the temperature range of interest. At low temperatures, the DC conductivity increases gradually with increasing temperature, while above 333 K an abrupt change in DC conductivity is observed. This rapid increase of DC conductivity at high temperatures can be attributed to the phase change of PEO from the semi-crystalline to the amorphous phase; earlier studies have likewise correlated the abrupt increase of DC conductivity around 60 °C with the typical semi-crystalline-to-amorphous phase transition in PEO4,52. The linear variation of DC conductivity with temperature below 60 °C implies that ion transport occurs through thermally activated processes and thus obeys Arrhenius behavior52. Clearly, the temperature dependence of the DC conductivity shows curvature above 60 °C. The expansion of the polymer matrix with increasing temperature creates free volume and empty spaces into which the ions migrate; this facilitates ion mobility and reduces the ion-cloud effect at the electrode/electrolyte interface2. It is established that greater ionic diffusivity is achieved in amorphous polymers53. As amorphous zones gradually develop in region II (above 60 °C), the polymer chains acquire faster internal modes in which segmental motion is produced by bond rotations. This in turn favors inter-chain hopping of ions and thus improves the conductivity of the polymer electrolyte. Above the melting point, the variation of the conductivity is small because of the completely amorphous nature of the electrolyte samples54. The curvature of the Arrhenius-type plots shows that the ionic conduction follows the Vogel–Tamman–Fulcher (VTF) relation8, which describes transport properties in a viscous matrix. The VTF relation supports the idea that ions move through the plasticizer-rich phase, i.e.55,
$$\sigma_{dc} = A\,T^{-1/2}\exp\!\left(\frac{-E_{a}}{K_{B}\,(T-T_{0})}\right)$$
where Ea and A are fitting constants describing the activation energy and the number of charge carriers, respectively, KB is the Boltzmann constant and T0 refers to the equilibrium state of the system associated with zero configurational entropy. T0 is found to be approximately equal to Tg − 50 K, where Tg is the thermodynamic glass transition temperature of the system54,55. The segmental motion therefore either creates a channel for the ions to move through or allows the ions to hop from one site to another; in other words, translational ionic motion is facilitated by the segmental movement of the polymer45,56. To verify that the abrupt increase of DC conductivity is associated with the phase change of PEO from crystalline to amorphous above 60 °C, DSC was carried out on selected samples. Figure 11 shows the DSC plots for the CSPEN3 and CSPEN5 films. Clear peaks due to the melting of PEO are observed at 57.32 °C and 58.73 °C for the CSPEN3 and CSPEN5 samples, respectively. These DSC results are comparable with those reported in the literature for PEO-based blend and composite electrolytes15,23,24,25. Further insight into the phase change in the present blend electrolytes can be gained from the impedance study at different temperatures. We showed in the previous section that the growth of the second semicircle in the impedance plots is associated with the crystalline phase of the PEO content; based on the temperature-dependent DC conductivity and the DSC measurements, this second semicircle should disappear from the impedance spectra at 60 °C and above. Figure 12 shows the impedance plots of the CSPEN3 system at various temperatures. The diameter of the second semicircle decreases as the temperature is raised from 40 °C to 50 °C, and the second arc resulting from the crystalline region disappears at 60 °C and 70 °C. It is important to note that the slanted spike at low frequency and the incomplete semicircle at high frequency are evidence of the amorphous nature of the samples at high temperatures; other researchers have similarly related the observed spike and incomplete semicircle to the bulk conductivity of polymer electrolytes and polymer nanocomposites39,42,48,57,58,59. This indicates that the phase transition from crystalline to amorphous occurs above 60 °C and has a major effect on ion transport. Therefore, the abrupt increase of DC conductivity at ~60 °C results from the samples becoming amorphous at high temperatures. The schematic diagram in Fig. 13 illustrates the transition from the semi-crystalline to the amorphous phase as the temperature is raised above the Tm of the PEO polymer. Comparing Figs 7 and 12 is instructive for understanding the growth and decay of the crystalline phase: in Fig. 7 the diameter of the second semicircle increases with increasing PEO ratio because of the increased crystalline phase, whereas in Fig. 12 the second semicircle shrinks with increasing temperature and from 60 °C upwards only one semicircle can be distinguished, indicating the homogeneity (i.e., completely amorphous phase) of the system. Other researchers have also observed incomplete semicircular arcs in the high-frequency region and spikes at low frequency for highly ion-conducting chitosan-based membranes60,61.
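For readers who want to reproduce the temperature analysis, the snippet below fits the VTF expression above to a set of (T, σ_dc) points by non-linear least squares. The data array is a synthetic placeholder with roughly the right order of magnitude, since the numerical conductivities are only shown graphically in Fig. 10; with real data the fit returns A, Ea and T0.

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5  # Boltzmann constant in eV/K

def vtf(T, A, Ea, T0):
    """VTF equation: sigma_dc = A * T^(-1/2) * exp(-Ea / (k_B * (T - T0)))."""
    return A * T ** -0.5 * np.exp(-Ea / (K_B * (T - T0)))

# Synthetic placeholder data (temperature in K, conductivity in S/cm);
# real values would be read off Fig. 10 or taken from the raw measurements.
rng = np.random.default_rng(0)
T = np.array([303.0, 313.0, 323.0, 333.0, 343.0, 353.0, 363.0])
sigma = vtf(T, A=5.0e-2, Ea=0.03, T0=230.0) * (1.0 + 0.05 * rng.standard_normal(T.size))

popt, _ = curve_fit(vtf, T, sigma, p0=[1e-2, 0.05, 220.0],
                    bounds=([0.0, 0.0, 0.0], [np.inf, 1.0, 290.0]))
print("Fitted A = {:.2e} S/cm K^0.5, Ea = {:.3f} eV, T0 = {:.1f} K".format(*popt))
```

Below the Tm, where the plot is linear, the same data can instead be fitted with the simple Arrhenius form (T0 = 0 and no T^(-1/2) prefactor) to extract the activation energy of the thermally activated region.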
It is well established that polymer electrolytes are heterogeneous materials owing to the coexistence of amorphous and crystalline phases. With increasing temperature up to the Tm of PEO, the system changes into a homogeneous material with high DC conductivity. The present results illustrate the precision of the measurement: almost exactly at the Tm of PEO, the second semicircle disappears.
DC conductivity versus 1000/T for (a) CSPEN1, (b) CSPEN2, (c) CSPEN3, (d) CSPEN4 and (e) CSPEN5 films.
DSC measurements for (a) the CSPEN3 blend sample and (b) the CSPEN5 blend sample.
Impedance plots for the CSPEN3 sample at different temperatures. From 30 to 50 °C the diameter of the second semicircle decreases; from 60 °C upwards only one semicircle can be distinguished.
Schematic of the proposed phase change from the semi-crystalline to the amorphous phase above the Tm of PEO.
Various amounts of PEO were incorporated into the CS:AgNt system to fabricate blend polymer electrolytes with an enhanced ion transport mechanism. The UV-vis spectra show broad surface plasmon resonance (SPR) absorption peaks owing to metallic silver particles; this result is further supported by optical microscopy (OM), SEM and EDX analysis, with small white specks found on the sample surfaces by both imaging techniques (OM and SEM). The spherulites observed in the OM micrographs and the phase separations that emerge in the SEM images are ascribed to the crystalline phases of the PEO polymer. The strong crystalline PEO peaks at 50 wt.% account for the numerous spherulites in the OM images and the wide phase separation in the SEM micrographs, and they are responsible for the rise in resistivity; the dominance of the crystalline phase at 50 wt.% PEO also accounts for the disappearance of the first semicircle in the room-temperature impedance plots. The DC conductivity increases slowly with temperature in the low-temperature range, while a rapid change is observed above 333 K. This rapid increase of DC conductivity at high temperatures is correlated with the phase transition of PEO from the semi-crystalline to the amorphous phase, as indicated by the DSC measurements. The impedance results for the various PEO concentrations at ambient temperature, together with the impedance spectra at selected temperatures, reveal that impedance measurement can be used as a unique approach to exploring the microstructure of polymer-electrolyte-based materials.
Hema, M., Selvasekarapandian, S., Arunkumar, D., Sakunthala, A. & Nithya, H. FTIR, XRD and ac impedance spectroscopic study on PVA based polymer electrolyte doped with NH4X (X = Cl, Br, I). J. Non-Cryst. Solids 355, 84–90 (2009).
Ramesh, S., Liew, C.-W., Morris, E. & Durairaj, R. Effect of PVC on ionic conductivity, crystallographic structural, morphological and thermal characterizations in PMMA–PVC blend-based polymer electrolytes. Thermochim. Acta 511, 140–146 (2010).
He, Y., Zhu, B. & Inoue, Y. Hydrogen bonds in polymer blends. Prog. Polym. Sci. 29, 1021–1051 (2004).
Chandra, A., Agrawal, R. C. & Mahipal, Y. K. Ion transport property studies on PEO–PVP blended solid polymer electrolyte membranes. J. Phys. D: Appl. Phys. 42, 135107 (4pp) (2009).
Sivakumar, M., Subadevi, R., Rajendran, S., Wu, N.-L. & Lee, J.-Y. Electrochemical studies on [(1 − x)PVA–xPMMA] solid polymer blend electrolytes complexed with LiBF4. Mater. Chem. Phys. 97, 330–336 (2006).
Cao, L. et al. Biopolymer-Chitosan based supramolecular hydrogels as solid state electrolyte for electrochemical energy storage. Chem. Commun. 53, 1615–1618 (2017).
Aziz, S. B., Abdullah, O. Gh & Hussein, S. A. Role of Silver Salts Lattice Energy on Conductivity Drops in Chitosan Based Solid Electrolyte: Structural, Morphological and Electrical Characteristics. J. Electron. Mater. 47, 3800–3808 (2018).
Aziz, S. B., Abdullah, O. Gh & Rasheed, M. A. A novel polymer composite with a small optical band gap: New approaches for photonics and optoelectronics. J. Appl. Polym. Sci. 134, 44847 (2017).
Aziz, S. B., Abidin, Z. H. Z. & Arof, A. K. Influence of silver ion reduction on electrical modulus parameters of solid polymer electrolyte based on chitosan-silver triflate electrolyte membrane. Express Polym. Lett. 5, 300–310 (2010).
Aziz, S. B. & Abidin, Z. H. Z. Electrical and morphological analysis of chitosan: AgTf solid electrolyte. Mater. Chem. Phys. 144, 280–286 (2014).
Aziz, S. B., Abidin, Z. H. Z. & Kadir, M. F. Z. Innovative method to avoid the reduction of silver ions to silver nanoparticles (Ag+ → Ag°) in silver ion conducting based polymer electrolytes. Phys. Scr. 90, 035808 (9pp) (2015).
Aziz, S. B. et al. Polymer Blending as a Novel Approach for Tuning the SPR Peaks of Silver Nanoparticles. Polymers 9(10), 486 (2017).
Fiory, F. S. et al. PEO based polymer electrolyte lithium-ion battery. J. Eur. Ceram. Soc. 24, 1385–1387 (2004).
Li, W. et al. A PEO-based gel polymer electrolyte for lithium ion batteries. RSC Adv. 7, 23494–23501 (2017).
Fullerton-Shirey, S. K. & Maranas, J. K. Effect of LiClO4 on the Structure and Mobility of PEO-Based Solid Polymer Electrolytes. Macromol. 42, 2142–2156 (2009).
Sharma, J. P., Yamada, K. & Sekhon, S. S. Conductivity Study on PEO Based Polymer Electrolytes Containing Hexafluorophosphate Anion: Effect of Plasticizer. Macromol. Symp. 315, 188–197 (2012).
Wang, G. X., Yang, L., Wang, J. Z., Liu, H. K. & Dou, S. X. Enhancement of Ionic Conductivity of PEO Based Polymer Electrolyte by the Addition of Nano size Ceramic Powders. J. Nanosci. Nanotech. 5, 1135–1140 (2005).
Wang, W. & Alexandridis, P. Composite Polymer Electrolytes: Nanoparticles Affect Structure and Properties. Polymers 8(11), 387 (2016).
Koduru, H. K. et al. Investigations on Poly (ethylene oxide) (PEO) – blend based solid polymer electrolytes for sodium ion batteries. J. Phys. Conf. Ser. 764, 012006 (2016).
Yang, Y., Inoue, T., Fujinami, T. & Mehta, M. A. Ionic Conductivity and Interfacial Properties of Polymer Electrolytes Based on PEO and Boroxine Ring Polymer. J. Appl. Polym. Sci. 84, 17–21 (2002).
Chun-Guey, W., Chiung-Hui, W., Ming, L. & Huey-Jan, C. New Solid Polymer Electrolytes Based on PEO/PAN Hybrids. J. Appl. Polym. Sci. 99, 1530–1540 (2006).
Kim, M., Lee, L., Jung, Y. & Kim, S. Study on Ion Conductivity and Crystallinity of Composite Polymer Electrolytes Based on Poly(ethylene oxide)/Poly(acrylonitrile) Containing Nano-Sized Al2O3 Fillers. J. Nanosci. Nanotechnol. 13, 7865–7869 (2013).
Jo, G., Jeon, H. & Park, M. J. Synthesis of Polymer Electrolytes Based on Poly(ethylene oxide) and an Anion-Stabilizing Hard Polymer for Enhancing Conductivity and Cation Transport. ACS Macro. Lett. 4, 225–230 (2015).
Karmakar, A. & Ghosh, A. Poly ethylene oxide (PEO)-LiI polymer electrolytes embedded with CdO nanoparticles. J. Nanopart. Res. 13, 2989–2996 (2011).
Elbellihi, A. A., Bayoumy, W. A., Masoud, E. M. & Mousa, M. A. Preparation, Characterizations and Conductivity of Composite Polymer Electrolytes Based on PEO-LiClO4 and Nano ZnO Filler. Bull. Korean Chem. Soc. 33(9), 2949–2954 (2012).
Kang, Y., Kim, H. J., Kim, E., Oh, B. & Cho, J. H. Photocured PEO-based solid polymer electrolyte and its application to lithium-polymer batteries. J. Power Sources 92, 255–259 (2001).
Karan, N. K., Pradhan, D. K., Thomas, R., Natesan, B. & Katiyar, R. S. Solid polymer electrolytes based on polyethylene oxide and lithium trifluoromethane sulfonate (PEO–LiCF3SO3): Ionic conductivity and dielectric relaxation. Solid State Ionics 179, 689–696 (2008).
Yang, R., Zhang, S., Zhang, L. & Liu, W. Electrical properties of composite polymer electrolytes based on PEO-SN-LiCF3SO3. Int. J. Electrochem. Sci. 8, 10163–10169 (2013).
Bandara, T. M. W. J. et al. Electrical and complex dielectric behaviour of composite polymer electrolyte based on PEO, alumina and tetrapropylammonium iodide. Ionics 23, 1711–1719 (2017).
Chandra, S., Hashmi, S. A., Saleem, M. & Agrawal, R. C. Investigations on poly ethylene oxide based polymer electrolyte complexed with AgNO3. Solid State Ionics 67, 1–7 (1993).
Angelescu, D. G., Vasilescu, M., Somoghi, R., Donescu, D. & Teodorescu, V. S. Kinetics and optical properties of the silver nanoparticles in aqueous L64 block copolymer solutions. Colloids Surf., A: Physicochem. Eng. Aspects 366, 155–162 (2010).
Aziz, S. B., Rasheed, M. A. & Ahmed, H. M. Synthesis of Polymer Nanocomposites Based on [Methyl Cellulose](1-x):(CuS)x (0.02 M ≤ x ≤ 0.08 M) with Desired Optical Band Gaps. Polymers 9(6), 193 (2017).
Loo, Y. Y., Chieng, B. W., Nishibuchi, M. & Radu, S. Synthesis of silver nanoparticles by using tea leaf extract from Camellia Sinensis. Int. J. Nanomed. 7, 4263–4267 (2012).
Sekhon, S. S., Singh, G., Agnihotry, S. A. & Chandra, S. Solid polymer electrolytes based on polyethylene oxide-silver thiocyanate. Solid State Ionics 80, 37–44 (1995).
Marzantowicz, M. et al. In situ microscope and impedance study of polymer electrolytes. Electrochim. Acta 51, 1713–1727 (2006).
Marzantowicz, M. et al. Crystallization and melting of PEO:LiTFSI polymer electrolytes investigated simultaneously by impedance spectroscopy and polarizing microscopy. Electrochim. Acta 50, 3969–3977 (2005).
Singh, P. K., Kim, K.-W. & Rhee, H.-W. Electrical, optical and photoelectrochemical studies on a solid PEO-polymer electrolyte doped with low viscosity ionic liquid. Electrochem. Commun. 10, 1769–1772 (2008).
Reddy, M. J. & Chu, P. P. Optical microscopy and conductivity of poly(ethylene oxide) complexed with KI salt. Electrochim. Acta 47, 1189–1196 (2002).
Aziz, S. B., Abdullah, O. G., Rasheed, M. A. & Ahmed, H. M. Effect of high salt concentration (HSC) on structural, morphological, and electrical characteristics of chitosan based solid polymer electrolytes. Polymers 9(6), 187 (2017).
Aziz, S. B., Mamand, S. M., Saed, S. R., Abdullah, R. M. & Hussein, S. A. New Method for the Development of Plasmonic Metal-Semiconductor Interface Layer: Polymer Composites with Reduced Energy Band Gap. J. Nanomater. 2017, Article ID 8140693 (2017).
Nadimicherla, R., Kalla, R., Muchakayala, R. & Guo, X. Effects of potassium iodide (KI) on crystallinity, thermal stability, and electrical properties of polymer blend electrolytes (PVC/PEO:KI). Solid State Ionics 278, 260–267 (2015).
Kadir, M. F. Z., Majid, S. R. & Arof, A. K. Plasticized chitosan–PVA blend polymer electrolyte based proton battery. Electrochim. Acta 55, 1475–1482 (2009).
Shukur, M. F. & Kadir, M. F. Z. Hydrogen ion conducting starch-chitosan blend based electrolyte for application in electrochemical devices. Electrochim. Acta 158, 152–165 (2015).
Aziz, S. B. Occurrence of electrical percolation threshold and observation of phase transition in chitosan(1 − x):AgIx (0.05 ≤ x ≤ 0.2)-based ion-conducting solid polymer composites. Appl. Phys. A 122, 706 (2016).
Aziz, S. B. & Abidin, Z. H. Z. Ion-transport study in nanocomposite solid polymer electrolytes based on chitosan: Electrical and dielectric analysis. J. Appl. Polym. Sci. 132, 41774 (2015).
Aziz, S. B. & Abdullah, R. M. Crystalline and amorphous phase identification from the tanδ relaxation peaks and impedance plots in polymer blend electrolytes based on [CS:AgNt]x:PEO(x-1) (10 ≤ x ≤ 50). Electrochim. Acta 285, 30–46 (2018).
Abdullah, O. G., Aziz, S. B. & Rasheed, M. A. Incorporation of NH4NO3 into MC-PVA blend-based polymer to prepare proton-conducting polymer electrolyte films. Ionics 24, 777–785 (2018).
Aziz, S. B., Abdullah, O. Gh. & Rasheed, M. A. Structural and electrical characteristics of PVA: NaTf based solid polymer electrolytes: role of lattice energy of salts on electrical DC conductivity. J. Mater. Sci. - Mater. Electron. 28, 12873–12884 (2017).
Eftekhari, A. The mechanism of ultrafast supercapacitors. J. Mater. Chem. A 6, 2866–2876 (2018).
Cebeci, F. Ç., Geyik, H., Sezer, E. & Sarac, A. S. Synthesis, electrochemical characterization and impedance studies on novel thiophene-nonylbithiazole-thiophene comonomer. J. Electroanal. Chem. 610, 113–121 (2007).
Vergaz, R., Barrios, D., Sánchez-Pena, J.-M., Pozo-Gonzalo, C. & Salsamendi, M. Relating cyclic voltammetry and impedance analysis in a viologen electrochromic device. Sol. Energy Mater. Sol. Cells 93, 2125–2132 (2009).
Agrawal, R. C., Mahipal, Y. K. & Ashrafi, R. Materials and ion transport property studies on hot-press casted solid polymer electrolyte membranes: [(1 − x) PEO: x KIO3]. Solid State Ionics 192, 6–8 (2011).
Raj, C. J. & Varma, K. B. R. Synthesis and electrical properties of the (PVA)0.7(KI)0.3·xH2SO4(0 ≤ x ≤ 5)polymer electrolytes and their performance in a primary Zn/MnO2 battery. Electrochim. Acta 56, 649–656 (2010).
Hirankumar, G., Selvasekarapandian, S., Kuwata, N., Kawamura, J. & Hattori, T. Thermal, electrical and optical studies on the poly(vinyl alcohol) based polymer electrolytes. J. Power Sources 144, 262–267 (2005).
Baskaran, R., Selvasekarapandian, S., Hirankumar, G. & Bhuvaneswari, M. S. Vibrational, ac impedance and dielectric spectroscopic studies of poly(vinylacetate)–N,N–dimethylformamide–LiClO4 polymer gel electrolytes. J. Power Sources 134, 235–240 (2004).
Aziz, S. B., Woo, T. J., Kadir, M. F. Z. & Ahmed, H. M. A Conceptual Review on Polymer Electrolytes and Ion Transport Models. J. Sci.: Adv. Mater. Devices 3, 1–17 (2018).
Pradhan, D. K., Choudhary, R. N. P. & Samantaray, B. K. Studies of structural, thermal and electrical behavior of polymer nanocomposite electrolytes. eXPRESS Polym. Lett. 2(9), 630–638 (2008).
Pradhan, D. K., Choudhary, R. N. P. & Samantaray, B. K. Studies of Dielectric Relaxation and AC Conductivity Behavior of Plasticized Polymer Nanocomposite Electrolytes. Int. J. Electrochem. Sci. 3, 597–608 (2008).
Aziz, S. B., Abdullah, R. M., Rasheed, M. A. & Ahmed, H. M. Role of Ion Dissociation on DC Conductivity and Silver Nanoparticle Formation in PVA:AgNt Based Polymer Electrolytes: Deep Insights to Ion Transport Mechanism. Polymers 9(8), 338 (2017).
Wan, Y., Peppley, B. K., Creber, A. M. & Bui, V. T. Anion-exchange membranes composed of quaternized-chitosan derivatives for alkaline fuel cells. J. Power Sources 195, 3785–3793 (2010).
García-Cruz, L., Casado-Coterillo, C., Iniesta, J., Montiel, V. & Irabien, A. Preparation and characterization of novel chitosan-based mixed matrix membranes resistant in alkaline media. J. Appl. Polym. Sci. 132, 42240 (2015).
The authors gratefully acknowledge the financial support for this study from Ministry of Higher Education and Scientific Research-Kurdistan Regional Government, Department of Physics, College of Science, University of Sulaimani, Sulaimani, and Komar Research Center (KRC), Komar University of Science and Technology. The authors appreciatively acknowledge the financial support from the Kurdistan National Research Council (KNRC)- Ministry of Higher Education and Scientific Research-KRG, Iraq for this research project.
Prof. Hameed's Advanced Polymeric Materials Research Laboratory, Department of Physics, College of Science, University of Sulaimani, Sulaimani, Kurdistan Regional Government, Iraq
Shujahadeen B. Aziz & Omed Gh. Abdullah
Komar Research Center (KRC), Komar University of Science and Technology, Sulaimani, 46001, Kurdistan Regional Government, Iraq
Department of Physics, Faculty of Science and Health, Koya University, University Park, Koysinjaq, Kurdistan Regional Government, Iraq
M. G. Faraj
Shujahadeen B. Aziz
Omed Gh. Abdullah
Shujahadeen B. Aziz analyzed the data and wrote the paper. M. G. Faraj and Omed Gh. Abdullah performed the experiments and reviewed the manuscript.
Correspondence to Shujahadeen B. Aziz.
Aziz, S.B., Faraj, M.G. & Abdullah, O.G. Impedance Spectroscopy as a Novel Approach to Probe the Phase Transition and Microstructures Existing in CS:PEO Based Blend Electrolytes. Sci Rep 8, 14308 (2018). https://doi.org/10.1038/s41598-018-32662-1
Blend Electrolyte
Chitosan (CS)
Polymer Electrolyte
Impedance Plots
Why is the boiling point of hydrogen sulfide low?
Hydrogen sulfide (H2S) boils at -60.7 °C, whereas water boils at +100.0 °C; the melting point of hydrogen sulfide is about -85 °C. The reason for the difference lies in the forces between molecules. Sulfur is not nearly as electronegative as oxygen, so hydrogen sulfide is not nearly as polar as water and cannot form the strong hydrogen bonds that hold water molecules together. Because of this, only comparatively weak intermolecular forces exist for H2S, and its melting and boiling points are much lower than they are in water. Water's unusually high boiling point comes from hydrogen bonding: the small, electronegative oxygen atom is strongly attracted to the hydrogen atoms of neighbouring water molecules, and it takes much more energy (a higher temperature) to pull the molecules apart and let them escape as steam. Unlike ice, which contains an open lattice of water molecules hydrogen-bonded to each other, solid hydrogen sulphide contains H2S molecules in close packing. Condensed H2S does show a weak S-H···S interaction (a nearly linear contact of about 175°, with an H···S distance of about 2.778 Å, less than the sum of the van der Waals radii of H and S), but this "hydrogen bonding" is far weaker than that in water.

The trend is clear across the hydrides of group 16: H2Te boils at about -4 °C, H2Se at about -42 °C and H2S at about -60 °C, while H2O sits far above them at +100 °C because of its hydrogen bonding. Like water, hydrogen sulfide is a small, bent molecule containing only three atoms, held together by very strong covalent bonds (shared pairs of electrons). Substances made of small molecules have low melting and boiling points and do not conduct electricity, because boiling only has to overcome the weak forces between the molecules, not the covalent bonds within them. A related comparison: hydrogen chloride, despite its larger dipole, boils even lower than H2S, at about -85 °C, while non-polar chlorine boils at 238 K compared with 188 K for HCl.

Molecular hydrogen boils lower still. Hydrogen is the lightest element (atomic number 1, standard atomic weight about 1.008), and its monatomic form is the most abundant chemical substance in the Universe, making up roughly 75% of ordinary matter. H2 molecules are completely non-polar, so there is no hydrogen bonding and no dipole-dipole attraction between them; only very weak dispersion forces remain, and because they are the lightest of all molecules, a given amount of energy raises their temperature more than it would molecules of a heavier gas such as CO2. To exist as a liquid at all, H2 must be cooled below its critical point of 33 K, and at atmospheric pressure it must be cooled to 20.28 K (-252.87 °C).

A. Appearance. Hydrogen sulfide is a colorless chalcogen hydride gas with an offensive stench, said to smell like rotten eggs. It is poisonous, corrosive and flammable, and it is considered a broad-spectrum poison: it can affect several systems in the body, with the nervous system most affected. The gas can be detected at a level of 2 parts per billion; to put this into perspective, 1 mL of the gas distributed evenly in a 100-seat lecture hall is about 20 ppb.

B. Physical properties. Hydrogen sulfide has a structure similar to that of water, but, as explained above, far weaker intermolecular forces.

C. History. Hydrogen sulfide has been known since early times, and its chemistry has been studied since the 1600s. In the 19th century the Dutch pharmacist Petrus Johannes Kipp invented a convenient device, the Kipp generator, in which a liquid and a solid react to generate a gas; it was especially useful for producing hydrogen sulfide and hydrogen, and more information on its use is given in the history portion of our gas chemistry web site. Hydrogen sulfide was once used so routinely for analysis that the Chemistry Building at the University of Illinois in 1915 had a built-in supply of H2S to the various labs, i.e., H2S "on tap", stored in a 500-gallon tank (Chemical Discovery and Invention in the Twentieth Century, Sir William Tildon, 1917).

D. Natural abundance. Natural gas contains up to several percent H2S(g); such wells are called "sour gas" wells because of the offensive stench. Hydrogen sulfide is also produced by the microbial breakdown of organic matter in the absence of oxygen, as in swamps and sewers, and volcanoes discharge it as well. Anaerobic decay aided by bacteria produces hydrogen sulfide which, in turn, produces sulfur; this process accounts for much of the native sulfur found in nature.

E. Industrial production. Commercially, hydrogen sulfide is obtained from "sour gas" natural gas wells. Its combustion follows principles similar to those of methane, with a chain-reaction mechanism and thermal decomposition of the fuel as the radical initiation step; the overall reaction is H2S + 1.5 O2 ---> SO2 + H2O (with Sx or SO appearing, depending on the stoichiometry). About 25% of all sulfur is obtained from natural gas and crude oil by converting one third of the H2S to SO2 and then allowing the two to react (burning 8 of every 24 moles of H2S supplies exactly the SO2 needed to convert the remaining 16):

2 H2S(g) + 3 O2(g) ---> 2 SO2(g) + 2 H2O(g)

16 H2S(g) + 8 SO2(g) ---> 3 S8(g) + 16 H2O(g)

F. Industrial uses. Hydrogen sulfide has few important commercial uses. It is, however, used to produce sulfur, one of the most commercially important elements, and it has been used for well over a century as a method of qualitative analysis of metal ions.

G. Gas density. The density of hydrogen sulfide is 1.393 g/L at 25 °C and 1 atm, about 18% greater than that of air.

H. Gas solubility. Hydrogen sulfide dissolves in water to make a solution that is weakly acidic. At 0 °C, 437 mL of H2S(g) will dissolve in 100 mL of H2O, producing a solution that is about 0.2 M; the solution process, however, is fairly slow.

A related note on sodium sulfide: industrial sodium sulfide is tinted pink, brownish red or khaki by impurities, and its specific gravity, melting point and boiling point also depend on the impurities present. In moist air it hydrolyzes and is carbonated, constantly releasing hydrogen sulfide gas.
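Two of the figures above can be double-checked with the ideal gas law (a rough, back-of-the-envelope estimate assuming ideal-gas behaviour and a molar mass of about 34.1 g/mol for H2S). Density at 25 °C and 1 atm: d = PM/RT = (1 atm x 34.1 g/mol) / (0.0821 L atm mol-1 K-1 x 298 K) ≈ 1.39 g/L, compared with roughly 1.18 g/L for air (average molar mass about 29 g/mol), i.e. about 18% denser, as stated above. Solubility at 0 °C: 437 mL of gas is 0.437 L / 22.4 L/mol ≈ 0.02 mol, and dissolving that in 100 mL of water gives roughly 0.2 mol/L, matching the 0.2 M figure quoted above.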
|
CommonCrawl
|
\begin{document}
\begin{abstract} For a rather general Banach space $X$, we prove that a nonempty closed convex bounded set $C\subset X$ is weakly compact if and only if every nonempty closed convex subset of $C$ has the fixed point property for the class of bi-Lipschitz affine maps. This theorem significantly complements and generalizes to some extent a known result of Benavides, Jap\'on-Pineda and Prus published in 2004. The proof is based on basic sequences techniques and involves clever constructions of fixed-point free affine maps under the lack of weak compactness. In fact, this result can be strengthened when $X$ fulfills Pe\l czy\'nski's property $(u)$. \end{abstract}
\title[Weak compactness and FPP]{Weak compactness and fixed point property for affine bi-Lipschitz maps} \author{Cleon S. Barroso} \address{Departamento de Matem\'atica \\ Universidade Federal do Cear\'a \\ Campus do Pici \\ 60455-360 Fortaleza, CE, Brazil} \email{[email protected]} \author{Valdir Ferreira} \address{Centro de Ci\^encias e Tecnologia \\ Universidade Federal do Cariri \\
Bairro Cidade Universit\'aria\\
63048-080 Juazeiro do Norte, CE, Brazil} \email{[email protected]}
\subjclass[2000]{} \date{} \keywords{}
\maketitle
\setcounter{tocdepth}{1} \tableofcontents
\section{Introduction}\label{intro}
Describing and understanding topological phenomena remains one of the most active topics in functional analysis. The problem of describing weak compactness has so far particularly been a topic of great interest. In this paper we are concerned with the problem of whether weak compactness can be characterized by the metric fixed point property (FPP). Recall that a topological space $C$ is said to have the FPP for a class $\mathcal{M}$ of maps if every $f\in \mathcal{M}$ with $f(C)\subset C$ has a fixed point. This problem has been studied from a number of topological viewpoints by several authors, see e.g. \cite{Kl,Flo,LS,DM,BKR,BPP} and references therein. In the purely metric context, it is often subjected to structural considerations. This can be seen in several works relating weak compactness to the FPP for nonexpansive ($1$-Lipschitz) affine mappings. For example, Lennard and Nezir \cite{LN} proved that if a Banach space $X$ contains a basic sequence $(x_n)$ asymptotically isometric to the $\co$-summing basis, then its closed convex hull $\overline{\conv}\big( \{ x_n\}\big)$ fails the FPP for affine nonexpansive mappings. Typically, in these cases, the set $\overline{\conv}\big( \{ x_n\}\big)$ is not weakly compact.
An interesting relaxation of the FPP is the generic-FPP ($\mathcal{G}$-FPP), a notion first proposed in \cite{BPP}. For a convex subset $M$ of a topological vector space $X$, denote by $\mathcal{B}(M)$ the family of all nonempty bounded, closed convex subsets of $M$.
\begin{defi}[\cite{BPP}]\label{def:1sec1} A nonempty set $C\in\mathcal{B}(X)$ is said to have the $\mathcal{G}$-FPP for a class $\mathcal{M}$ of mappings if whenever $K\in \mathcal{B}(C)$ then every mapping $f\in \mathcal{M}$ with $f(K)\subset K$ has a fixed point. \end{defi}
There is quite a lot known about the $\mathcal{G}$-FPP. For instance, Dowling, Lennard and Turett \cite{DLT1,DLT2} proved that when $X$ is either $\co$, $L_1(0,1)$ or $\ell_1$, a set $C\in \mathcal{B}(X)$ is weakly compact if and only if it has the $\mathcal{G}$-FPP for affine nonexpansive maps. In 2004 Benavides, Jap\'on-Pineda and Prus proved, among other important results, the following facts.
\begin{thm}[(Benavides, Jap\'on Pineda and Prus \cite{BPP})]\label{thm:BPP} Let $X$ be a Banach space and $C\in \mathcal{B}(X)$. Then \begin{enumerate} \renewcommand{\labelenumi}{(\roman{enumi})} \item $C$ is weakly compact if and only if $C$ has the $\mathcal{G}$-FPP for continuous affine maps.
\item If $X$ is either $\co$ (equipped with the supremum norm $\| \cdot\|_\infty$) or $J_p$ (the James space), then $C$ is weakly compact if and only if $C$ has the $\mathcal{G}$-FPP for uniformly Lipschitzian affine maps. \item If $X$ is an $L$-embedded Banach space, then $C$ is weakly compact if and only if it has the $\mathcal{G}$-FPP for nonexpansive affine mappings. \end{enumerate} \end{thm}
Recall that a map $f\colon C\to X$ is said to be uniformly Lipschitz if \[
\sup_{x\neq y\in C,\, p\in \mathbb{N}} \frac{ \| f^p(x) - f^p(y)\|}{\| x - y\|}<\infty, \] where $f^p$ denotes the $p^{\textrm{th}}$ iteration of the mapping $f$. If, in addition, its inverse $f^{-1}$ is uniformly Lipschitz then $f$ is said to be uniformly bi-Lipschitz. Therefore, if $f$ is nonexpansive then it obviously is uniformly Lipschitz. It is worth stressing that norm-continuous affine maps are in fact weakly continuous. Thus, one direction of the statements in Theorem \ref{thm:BPP} easily follows from Schauder-Tychonoff's fixed point theorem as pointed out in \cite{BPP}.
At first sight one may be tempted to characterize weak-compactness in terms of the $\mathcal{G}$-FPP for nonexpansive maps. However this is not generally true. Indeed, in 2008 P.-K. Lin \cite{Lin} equipped $\ell_1$ with the equivalent norm \[
\vertiii{ x }_{\mathscr{L}}=\sup_{k\in\mathbb{N}}\frac{8^k}{1+8^k}\sum_{n=k}^\infty | x(n)|\quad\textrm{for}\quad x=(x(n))_{n=1}^\infty \in \ell_1, \] and proved that every $C\in \mathcal{B}\big((\ell_1, \vertiii{\cdot}_{\mathscr{L}})\big)$ has the FPP for nonexpansive maps. Hence the unit ball $B_{(\ell_1, \vertiii{\cdot}_{\mathscr{L}})}$ has the $\mathcal{G}$-$FPP$ for affine nonexpansive maps, but fails to be weakly compact.
Another interesting example is highlighted by the following result from the recent literature, due to T. Gallagher, C. Lennard and R. Popescu:
\begin{thm}[\cite{GLP}] Let $c$ be the Banach space of convergent scalar sequences. Then there exists a non-weakly compact set $C\in \mathcal{B}\big( (c, \|\cdot\|_\infty)\big)$ with the FPP for nonexpansive mappings. \end{thm}
It is natural to ask, therefore, whether weak compactness describes the $\mathcal{G}$-FPP for affine uniformly Lipschitz maps in arbitrary Banach spaces. To make this more precise, we formulate the following:
\begin{qtn}\label{qtn:1} Let $X$ be a Banach space and $C\in \mathcal{B}(X)$. Assume that $C$ is not weakly compact. Does there exist a set $K\in \mathcal{B}(C)$ and an affine uniformly Lipschitz map $f\colon K\to K$ that is fixed-point free? \end{qtn}
In a naive way, one could try to get a wide-$(s)$ sequence which uniformly dominates all of its subsequences; that is, a basic sequence $(x_n)$ such that for some positive constants $d$ and $D$ and every increasing sequence of integers $(n_i)\subset \mathbb{N}$, the following inequalities hold for all $n\in \mathbb{N}$ and all choice of scalars $(a_i)_{i=1}^n$ \begin{equation}\label{eqn:1int}
d\left| \sum_{i=1}^n a_i\right| \leq \left\| \sum_{i=1}^n a_i x_{n_i}\right\| \leq D \left\| \sum_{i=1}^n a_i x_i\right\|. \end{equation}
This certainly obstructs the class of affine uniformly Lipschitz maps from having the $\mathcal{G}$-FPP. However this property has a strong unconditionality character. Indeed, subsymmetric or quasi-subsymmetric bases (in the sense of \cite[Corollary 2.7]{ABDS}) are examples of basic sequences of this kind. So, it might not be so easy to get them since unconditional basic sequences may not exist at all \cite{GM}.
Another possibility would be to try to get wide-$(s)$ sequences $(x_n)$ that dominate their shift subsequences $(x_{n+p})$, but uniformly in $p$. Typically, this happens when special structures are available, for example structures equivalent to $\co$ or $\ell_1$ (cf. also \cite[Theorem 1]{DLT1}, \cite[Theorem 4.2]{BPP}, \cite{LN} and \cite[Proposition 2.5.14]{MN}). Such a possibility would however imply that the shift operator induced by $(x_n)$ would be continuous. But this might be notoriously difficult, or even generally impossible. One of the reasons is that the class of Hereditarily Indecomposable spaces (spaces that have no decomposable subspaces, cf. \cite{GM}) does not admit shift-equivalent basic sequences, that is, sequences $(x_n)$ which are equivalent to their right-shift $(x_{n+1})$. Moreover, the Banach space $G$ constructed by Gowers in \cite{G} has an unconditional basis for which the right shift operator is not norm-bounded. These facts seem to indicate that there is no hope of solving Question \ref{qtn:1} by considering shift-like maps.
The first main result of this paper solves Question \ref{qtn:1} for the class of affine bi-Lipschitz maps. Precisely, it will be proved that if $X$ is a general Banach space and $C\in \mathcal{B}(X)$ is not weakly compact then it fails the $\mathcal{G}$-FPP for the class of bi-Lipschitz affine maps. Let us stress that the clever idea behind the proof is to build inside $C$ a basic sequence $(x_n)$ which dominates the summing basis of $\co$ and yet is equivalent to one of its non-trivial convex bases (see Definition \ref{def:3sec2} for the precise notion). This will give rise to a fixed-point free bi-Lipschitz affine map $f$ leaving invariant a set $K\in \mathcal{B}(C)$. As we shall see, the set $K$ is precisely the closed convex hull of $(x_n)$. As regards the map $f$, it will be essentially taken as the sum of a diagonal operator and a weighted shift map with properly chosen coefficients. This yields a new construction in metric fixed point theory and can make more transparent the challenges behind Question \ref{qtn:1}. To prove that $f$ is bi-Lipschitz we rely on a key lemma on affinely equivalent basic sequences. We also point out that our approach differs from that in \cite{BPP} where, because of the special nature of the spaces considered there, bilateral and right-shift maps were successfully used. The second main result is that one can affirmatively solve Question \ref{qtn:1} in spaces with Pe\l czy\'nski's property $(u)$. The proof uses a local version of a classical result of James proved for spaces with an unconditional basis.
The remainder of the paper is organized as follows. In Section 2 we will set up the notation and terminology adopted in this work. In Section 3 we briefly review a few ideas behind clever constructions of fixed-point free maps under the lack of weak compactness. Section 4 contains a fundamental lemma concerning a notion of affinely equivalent sequences introduced by Pe\l czy\'nski and Singer. Section 5 contains a local version of a result of James which describes the internal structure of bounded, closed convex sets in spaces with property $(u)$. In Section 6 we formally state and prove the main result of this paper. Finally, in Section 7 we state and prove the second main result of this paper.
\section{Notation and basic terminology}\label{sec2:Notation} Throughout this paper $X$ will denote a Banach space. The notation used here is standard. In particular, a sequence $(x_n)$ in $X$ is called a basic sequence if it is a Schauder basis for its closed linear span $[x_n]$. In this case $\mathcal{K}$ will stand for the basic constant of $(x_n)$. Further, we will also denote by $P_n$ and $R_n$ the natural basis projections given by \[ P_n x= \sum_{i=1}^n x^*_i(x) x_i\quad \textrm{ and } \quad R_nx= x - P_nx,\quad x\in [x_n] \]
where $\{ x^*_i\}_{i=1}^\infty$ are the biorthogonal functionals associated with $(x_n)$. Recall that $\mathcal{K}:=\sup_n\| P_n\|$. By $\mathrm{c}_{00}$ we denote the vector space of sequences of real numbers which eventually vanish. Let us now recall a few well-known notions from Banach space theory.
\begin{defi}\label{def:2sec2} Let $(x_n)\subset X$ and $(y_n)\subset Y$ be two sequences, where $X, Y$ are Banach spaces. The sequence $(x_n)$ is said to dominate the sequence $(y_n)$ if there exists a constant $L>0$ so that \[
\Big\| \sum_{n=1}^\infty a_n y_n \Big\| \leq L \Big\| \sum_{n=1}^\infty a_n x_n \Big\|, \] for all sequence $(a_n)\in \mathrm{c}_{00}$. \end{defi}
Observe that when $(x_n)$ and $(y_n)$ are both basic sequences, to say that $(x_n)$ dominates $(y_n)$ is the same as to say that the map $x_n\mapsto y_n$ extends to a linear bounded map between $[x_n]$ and $[y_n]$. The sequences $(x_n)$ and $(y_n)$ are said to be equivalent (also called $L$-equivalent, with $L\geq 1$) and one writes $(x_n)\sim_L (y_n)$, if for any $(a_i)\in \mathrm{c}_{00}$ one has that \[
\frac{1}{L} \Big\| \sum_{i=1}^\infty a_i x_i \Big\| \leq \Big\| \sum_{i=1}^\infty a_i y_i \Big\| \leq L \Big\| \sum_{i=1}^\infty a_i x_i \Big\|. \] The {\it summing basis} of $\co$ is the sequence $(\chi_{\{ 1,2, \dots, n\}})$ in $\co$ where for $n\in \mathbb{N}$, $\chi_{\{ 1,2, \dots, n\}}$ is defined by \[ \chi_{\{1,2, \dots, n\}}=e_1 + e_2 + \dots + e_n, \]
and $(e_n)$ being the canonical basis of $\co$. It is well known that the sequence $(\chi_{\{1, 2,\dots, n\}})_n$ defines a Schauder basis for $( \co, \| \cdot\|_\infty)$. A sequence $(x_n)$ in a Banach space $X$ is then said to be equivalent to the summing basis of $\co$ if \[ (x_n)\sim_L (\chi_{\{1, 2,\dots, n\}})\,\, \textrm{ for some } L\geq 1. \]
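Note that a direct computation with coordinates gives
\[
\Big\| \sum_{n=1}^N a_n \chi_{\{1,\dots, n\}}\Big\|_\infty = \max_{1\leq k\leq N}\Big| \sum_{n=k}^N a_n\Big|
\]
for every $N\in \mathbb{N}$ and all scalars $(a_n)_{n=1}^N$. In particular, if $(x_n)$ dominates the summing basis of $\co$ with constant $L$, then $\big| \sum_{n=1}^N a_n\big| \leq L\,\big\| \sum_{n=1}^N a_n x_n\big\|$ for all such scalars; this lower estimate is the form in which the wide-$(s)$ property will be used below.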
\begin{defi} A sequence $(x_n)$ in $X$ is called {\it semi-normalized} if \[
0<\inf_n\| x_n\|\leq \sup_n\| x_n\|<\infty. \] \end{defi}
The following additional notions were introduced by H. Rosenthal.
\begin{defi}[(Rosenthal \cite{Ro})] A seminormalized sequence $(x_n)$ in $X$ is called: \begin{enumerate} \item A non-trivial weak Cauchy sequence if it is weak Cauchy and non-weakly convergent. \item A wide-$(s)$ sequence if $(x_n)$ is basic and dominates the summing basis of $\co$. \item An $(s)$-sequence if $(x_n)$ is weak-Cauchy and a wide-$(s)$ sequence.
\item {\it Strongly summing} if it is a weak-Cauchy basic sequence so that whenever $(a_i)$ is a sequence of scalars with $\sup_n\big\| \sum_{i=1}^n a_i x_i\big\| <\infty$, $\sum_i a_i$ converges. \end{enumerate} \end{defi}
Rosenthal's $\mathrm{c}_0$-theorem \cite{Ro} ensures that every non-trivial weak-Cauchy sequence in $X$ has either a strongly summing subsequence or a convex block basis which is equivalent to the summing basis of $\mathrm{c}_0$. Finally, recall that a sequence of non-zero elements $(z_n)$ of $X$ is called a convex block basis of a given sequence $(x_n)\subset X$ if there exist integers $n_1<n_2<\dots$ and scalars $c_1, c_2,\dots$ so that \begin{enumerate} \item[(i)] $c_i\geq 0$ for all $i$ and $\sum_{i=n_j+1}^{n_{j+1}} c_i=1$ for all $j$.\vskip .1cm \item[(ii)] $z_j=\sum_{i=n_j + 1}^{n_{j+1}}c_i x_i$ for all $j$. \end{enumerate}
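For instance, if $(x_n)$ is a basic sequence then, taking $n_j=2j-1$ and the coefficients $c_{2j}=c_{2j+1}=\frac{1}{2}$ for all $j$, the averages $z_j=\frac{1}{2}\big( x_{2j} + x_{2j+1}\big)$ form a convex block basis of $(x_n)$.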
\section{Convex basic sequences}\label{sec:3} The construction of affine fixed-point free maps usually relies on maps which are defined by taking suitable convex combinations of some basic sequence $(x_n)$ in $X$. For example, in \cite{BPP} the following maps were considered in the proof of Theorem \ref{thm:BPP}: \[ f_0\Big( \sum_{n=1}^\infty t_n x_n\Big) = \sum_{n=1}^\infty t_n x_{n+1}, \] and \[ f_1\Big( \sum_{n=1}^\infty t_n x_n \Big) = t_2 x_1 + \sum_{n=1}^\infty t_{ 2n-1} x_{2n+1} + \sum_{n=2}^\infty t_{2n} x_{2n-2}. \] It is interesting to mention that, according to the terminology of \cite{BPP}, $f_0$ and $f_1$ are respectively a unilateral shift and a bilateral shift map.
As another instance, the authors in \cite{DLT2} have described weak compactness in $\co$ in terms of the $\mathcal{G}$-FPP for nonexpansive maps by considering the map: \[ f_2\Big( \sum_{n=1}^\infty t_n x_n\Big) = \sum_{n\in \mathbb{N}} \sum_{j\in \mathbb{N}} \frac{1}{2^j} t_n x_{j+n}. \]
Therefore if $X$ is structurally well-behaved these convex combinations can be dominated by $(x_n)$. This naturally reflects on the metric fixed point property. Thus, it seems reasonable to understand the structure of such combinations. As a step towards this direction, we consider the following slightly generalized notion of convex block sequences.
\begin{defi}\label{def:3sec2} Let $(x_n)$ be a sequence in $X$. A sequence $(z_n)$ is called a convex basis of $(x_n)$ if $(z_n)$ is basic and for each $n\in \mathbb{N}$ there exist scalars $\{ \lambda^{(n)}_k\}_{k=1}^\infty$ in $[0,1]$ so that $\sum_{i=1}^\infty \lambda^{(n)}_i=1$ and $z_n=\sum_{i=1}^\infty \lambda^{(n)}_ix_i$. \end{defi}
\begin{rmk}\label{rem:star} It is clear that every subsequence of a basic sequence $(x_n)$ is itself a convex basis of $(x_n)$. These subsequences will be referred to here as trivial convex bases. \end{rmk}
As we have mentioned in the introduction, one may try to describe weak-compactness in terms of the $\mathcal{G}$-$FPP$ for the class of uniformly Lipschitz maps by trying to get wide-$(s)$ sequences satisfying (\ref{eqn:1int}). The proposition below shows however that spaces with the scalar-plus-compact property are not optimal environments for doing that.
\begin{prop} Let $(x_n)$ be a wide-$(s)$ sequence in $X$. Assume that $(z_n)$ is a convex basis of $(x_n)$ whose subsequences are dominated by $(x_n)$. Then $\mathcal{L}( [x_n])$ is non-separable. \end{prop}
\begin{proof} We proceed as in \cite{ABDS} by obtaining uncountably many pairwise separated bounded linear operators on the space $[x_n]$. For each increasing sequence $(\kappa_n)$ in $\mathbb{N}$ define $T_{(\kappa_n)}\colon [x_n]\to [x_n]$ by $T_{(\kappa_n)}(x)= \sum_{n=1}^\infty x^*_n(x) z_{\kappa_n}$. By assumption each map $T_{(\kappa_n)}\in \mathcal{L}([x_n])$. Moreover, if $(\kappa_n)$ and $(\ell_n)$ are two different increasing sequences in $\mathbb{N}$ then for some $j\in \mathbb{N}$ with $\kappa_j \neq \ell_j$ we have \begin{eqnarray*}
\| T_{(\kappa_n)} - T_{(\ell_n)}\| &\geq& \left\| \sum_{n=1}^\infty \left( x^*_n\bigg(\frac{x_j}{\| x_j\|}\bigg)z_{\kappa_n} - x^*_n\bigg( \frac{ x_j}{\| x_j\| } \bigg) z_{\ell_n}\right)\right\|\\
&=& \frac{1}{\| x_j\|} \| z_{\kappa_j} - z_{\ell_j}\|\geq \frac{\inf_n\| z_n\|}{\mathcal{K}\sup_n\| x_n\|}>0, \end{eqnarray*}
where $\mathcal{K}$ denotes the basic constant of $(z_n)$. The penultimate inequality above follows easily from the fact that $(z_n)$ is $\mathcal{K}$--basic, while the last one is a direct consequence of $(x_n)$ being wide-$(s)$ which in turn implies $\inf_n\| z_n\|>0$. \end{proof}
Our first main result relies on the selection of non-trivial convex bases that must be structurally well behaved. This involves two important steps. The first one concerns the selection of wide-$(s)$ subsequences. To this end we will rely on the following result of Rosenthal (\cite[Proposition2]{Ro1}), the proof of which will be included here for reader's convenience. The second one concerns clever constructions of convex bases of wide-$(s)$ sequences, this precisely being the content of the next sections.
\begin{prop}\label{prop:selection} Let $X$ be a Banach space and $(y_n)$ be a seminormalized sequence in $X$. Assume that no subsequence of $(y_n)$ is weakly convergent. Then $(y_n)$ admits a wide-$(s)$ subsequence. \end{prop}
\begin{proof} If $(y_n)$ has no weak-Cauchy subsequence, then $(y_n)$ has an $\ell_1$-subsequence $(x_n)$ by the Rosenthal $\ell_1$-theorem. It is easy to see in this case that $(x_n)$ is wide-$(s)$. If otherwise $(y_n)$ has a weak-Cauchy subsequence $(y_{n_k})$, then from our assumption and \cite[p. 707]{Ro} we get that $(y_{n_k})$ is a non-trivial weak-Cauchy sequence. By \cite[Proposition 2.2]{Ro}, $(y_{n_k})$ has an $(s)$-subsequence $(x_n)$. This shows in particular that $(x_n)$ is wide-$(s)$ and concludes the proof. \end{proof}
\section{A lemma on affinely equivalent basic sequences}\label{sec:4} In this section we will establish a key lemma crucial for the proof of our first main result. It concerns the following notion introduced by Pe\l czy\'nski and Singer \cite{PS}.
\begin{defi}[Pe\l czy\'nski--Singer] A basic sequence $(x_n)$ in a Banach space $X$ is said to be affinely equivalent to a sequence $(y_n)$ if there exists a sequence of scalars $\alpha_n \neq 0$ such that $(x_n)$ and $(\alpha_n y_n)$ are equivalent. \end{defi}
The proof of our key lemma is based on the following result of H\'ajek and Johanis (\cite[Lemma 5-(a)]{HJ}). For completeness we will provide a more direct proof.
\begin{prop}[H\'ajek--Johanis]\label{lem:HJ} Let $X$ be a Banach space with a Schauder basis $\{x_n, x^*_n\}_{n=1}^\infty$. Assume that $\| R_n\| = 1$ for each $n\in \mathbb{N}$ and $\{ \alpha_n\}$ is a non-decreasing real sequence in $(0,1]$. Then
\Big\| \sum_{n=1}^\infty \alpha_n x^*_n(x) x_n\Big\| \leq \Big\| \sum_{n=1}^\infty x^*_n(x) x_n\Big\|\quad \textrm{ for each } x\in X. \] \end{prop}
\begin{proof} Fix $x\in X$. For each $N>1$, we define a new sequence $(y_{n,N})_{n=1}^\infty$ in $X$ by putting, for $n\geq N$, $y_{n, N}=\alpha_N x$ and \[ y_{n, N} =R_n y_{n+1, N} + \frac{\alpha_n}{\alpha_{n+1}}P_n y_{n+1, N}\quad\textrm{for } 1\leq n< N. \]
As in \cite{HJ} a direct computation shows $\| y_{n,N}\| \leq \| y_{n+1, N}\|$ for $n<N$. Moreover, an easy induction argument implies \[ y_{n, N} = \alpha_n P_{n-1} x + \sum_{k=n}^N \alpha_k x^*_k(x) x_k + \alpha_N R_N x,\quad 1\leq n< N. \] Thus $y_{1,N} =\sum_{k=1}^N \alpha_k x^*_k(x) x_k + \alpha_N R_Nx$ and hence \[
\Bigg\|\sum_{k=1}^N \alpha_k x^*_k(x) x_k + \alpha_N R_Nx\Bigg\| \leq \alpha_N \| x\|\leq \| x\|. \] Now it is easy to show, using that $\{\alpha_n\}$ is non-decreasing and that $(x_i)$ is basic, that the series $\sum_k \alpha_k x^*_k(x) x_k$ converges in $X$. Notice further that $\alpha_N R_Nx\to 0$. So, the result follows by taking the limit as $N\to\infty$.
\end{proof}
We are now ready to state and prove the main result of this section which yields a sufficient condition for a basic sequence to be affinely equivalent to itself.
\begin{lem}[Key Lemma]\label{prop:1HJ} Let $X$ be a Banach space and $(x_n)$ a basic sequence in $X$. Assume that $\{\alpha_n\}\subset (0,1]$ is a non-decreasing sequence of real numbers. Then $(x_n)\sim_ {2\mathcal{K}/\alpha_1} (\alpha_n x_n)$ where $\mathcal{K}$ is the basic constant of $(x_n)$. \end{lem}
\begin{proof} Let $L=2\mathcal{K}/\alpha_1$. The fact that $(x_n)$ $L$-dominates $(\alpha_n x_n)$ follows directly from Lemma \ref{lem:HJ}. To see this, it suffices to take an equivalent norm $\vertiii{\cdot}$ on $[x_n]$ so that in the new norm the basis $(x_n)$ fulfills $\vertiii{R_n}\leq 1$. Indeed, denote by $P_I$ the natural projection over a finite interval $I\subset\mathbb{N}$ and define a new norm on $[x_n]$ by \[
\vertiii{x} = \sup\Big\{ \| P_I x\|\,\colon \, I\subset\mathbb{N},\, I \textrm{ finite interval}\Big\} \quad \textrm{for } x\in [x_n]. \]
Hence $\|\cdot\|$ and $\vertiii{\cdot}$ are equivalent norms on $[x_n]$ with \[ \max\{ \vertiii{P_n}, \vertiii{R_n}\}\leq 1,\quad \textrm{ for all } n\in\mathbb{N}. \]
On the other hand, as $R^2_n= R_n$ implies $\| R_n\|\geq 1$, we get that $\vertiii{R_n}=1$ for all $n\in \mathbb{N}$. Moreover, observe that \[
\| x\| \leq \vertiii{x} \leq 2\mathcal{K}\| x\|\quad \textrm{ for each } x\in [x_n]. \] Thus this combined with Lemma \ref{lem:HJ} implies that, for every $(a_i)\in c_{00}$ \begin{eqnarray*}
\Big\| \sum_{i=1}^\infty a_i \alpha_i x_i\Big\| \leq \vertiii{ \sum_{i=1}^\infty a_i \alpha_i x_i}&\leq& \vertiii{\sum_{i=1}^\infty a_i x_i}\\
&\leq& 2\mathcal{K} \Big\| \sum_{i=1}^\infty a_i x_i \Big\|\leq \frac{2\mathcal{K} }{\alpha_1}\Big\| \sum_{i=1}^\infty a_i x_i \Big\|. \end{eqnarray*} To prove the reverse inequality, fix $N\in \mathbb{N}$ and pick any sequence of scalars $(a_i)_{i=1}^N$. Now combining the Abel's partial summation \[ \sum_{n=1}^N a_n\alpha_n x_n =\sum_{n=1}^{N-1} (\alpha_n - \alpha_{n+1}) \sum_{i=1}^n a_i x_i + \alpha_N \sum_{i=1}^N a_i x_i, \] with the $\vertiii{\cdot}$-monotonicity of $(x_n)$ (i.e., $\vertiii{P_n}\leq 1$ for any $n$), it follows that \begin{eqnarray*} \vertiii{ \sum_{n=1}^N a_n \alpha_n x_n }&\geq& \alpha_N \vertiii{ \sum_{i=1}^N a_i x_i } - \sum_{n=1}^{N-1} ( \alpha_{n+1} - \alpha_n) \vertiii{ \sum_{i=1}^n a_i x_i}\\ &\geq& \alpha_1 \vertiii{ \sum_{i=1}^N a_i x_i } \end{eqnarray*} which in turn yields \[
2\mathcal{K}\left\| \sum_{n=1}^N a_n \alpha_n x_n \right\| \geq \alpha_1 \left\| \sum_{n=1}^N a_n x_n \right\|. \] The proof is complete. \end{proof}
\section{Bounded, closed convex sets in spaces with property $(u)$} Recognizing local structures in Banach spaces is relevant in the study of metric fixed point theory. The main result of this section supplies a local version of a well-known result of James. It is concerned with the internal structure of bounded, closed convex sets in spaces with Pe\l czy\'nski's property $(u)$.
\begin{defi}[Pe\l czy\'nski] An infinite dimensional Banach space $X$ is said to have property $(u)$ if for every weak Cauchy sequence $(y_n)$ in $X$, there exists a sequence $(x_n)\subset X$ satisfying the properties below: \begin{enumerate} \item $\sum_{n=1}^\infty x_n$ is weakly unconditionally Cauchy $(WUC)$ series, i.e \[
\sum_{n=1}^\infty | x^* ( x_n) | <\infty \quad\textrm{ for all } x^*\in X^*. \] \item $(y_n - \sum_{i=1}^n x_i)_n$ converges weakly to zero. \end{enumerate} \end{defi}
\begin{rmk} A few known facts are in order: Banach spaces with an unconditional basis have property $(u)$ (cf. \cite[Proposition 3.5.4]{AK}). Other examples of spaces satisfying the property $(u)$ can be found in \cite{GL} where, for instance, it is shown that $L$-embedded spaces enjoy this property. The classical James' space $\mathcal{J}_2$ is an example of a space which fails property $(u)$. \end{rmk}
\begin{lem}\label{lem:KL2} Let $X$ be a Banach space with the property $(u)$ and $C\in\mathcal{B}(X)$. Then either $C$ is weakly compact, $C$ contains an $\ell_1$-sequence or $C$ contains a $\co$-summing basic sequence. \end{lem}
\begin{proof} Suppose $C$ is weakly compact. By \cite[Proposition 2-(a)]{Ro}, $C$ cannot contain wide-$(s)$ sequences. So, it contains neither $\ell_1$-basic sequences nor $\co$-summing basic sequences. Assume now that $C$ is not weakly compact. Either $C$ contains an $\ell_1$-sequence, in which case the result follows, or it does not. In the latter case, $C$ must contain a $\co$-summing basic sequence. Indeed, let $(y_n)\subset C$ be a weak-Cauchy sequence without weakly convergent subsequences. This is possible thanks to Eberlein-\v{S}mulian's theorem as well as Rosenthal's $\ell_1$-theorem. If $X$ has the property $(u)$, then so does the space $[(y_n)_n]$ (see \cite{Pel}; cf. also \cite[Proposition 3.5.4]{AK}). Therefore, by a result of Haydon, Odell and Rosenthal \cite{HOR} (cf. also \cite[p. 154]{KO}), $(y_n)$ has a convex block basis $(x_n)$ which is equivalent to the summing basis of $\co$. This concludes the proof. \end{proof}
\begin{rmk} It is worth mentioning that if $X$ has an unconditional basis then an even stronger result can be stated: \begin{lem} Let $X$ be a Banach space and $C\in \mathcal{B}(X)$. Assume that $X$ has an unconditional basis. Then exactly one of the following alternatives holds: $C$ is weakly compact, $C$ contains an $\ell_1$-sequence, or $C$ contains a $\co$-summing basic sequence. \end{lem}
\begin{proof} In view of the previous result it suffices to prove the result assuming that $C$ is not weakly compact. If $C$ contains an $\ell_1$-basic sequence, so does $X$. Since $X$ has an unconditional basis, by James' Theorem \cite{J1} $X$ does not contain any isomorphic copy of $\co$. Hence $C$ contains no $\co$-summing basic sequences. Suppose now that $C$ contains no $\ell_1$-basic sequences. As before, we claim that $C$ contains a $\co$-summing basic sequence. The proof of this assertion follows the same steps as in the final part of the proof of Lemma \ref{lem:KL2}. \end{proof}
\end{rmk}
\section{The $\mathcal{G}$-FPP in arbitrary Banach spaces} Our first main result reads as follows.
\begin{thm} Let $X$ be a Banach space and $C\in \mathcal{B}(X)$. Then $C$ is weakly compact if and only if $C$ has the $\mathcal{G}$-FPP for affine bi-Lipschitz maps. \end{thm}
\begin{proof} As we have mentioned before, if $C$ is weakly compact then it has the $\mathcal{G}$-FPP for any class of norm-continuous affine maps. Thus only the converse direction needs to be proved. Assume then that $C$ is not weakly compact. By Eberlein-\v{S}mulian's Theorem, we can find a sequence $(y_n)$ in $C$ with no weakly convergent subsequences. Let $(x_n)$ be the wide-$(s)$ subsequence of $(y_n)$ given by Proposition \ref{prop:selection}. In order to prove the failure of the $\mathcal{G}$-FPP we need to exhibit a set $K\in \mathcal{B}(C)$ and a fixed-point free bi-Lipschitz affine map $f\colon K\to K$. As regards the set $K$, we let $K=\overline{\conv}(\{ x_n\})$. Before starting the construction of $f$, we need to set up a useful formula for $K$. We claim:\vskip .1cm
\paragraph{\bf Claim:} $K=\big\{ \sum_{n=1}^\infty t_n x_n \,\colon\, \textrm{each } t_n\geq 0\, \textrm{ and }\, \sum_{n=1}^\infty t_n=1\big\}$.
\begin{proof}[Proof of Claim] Let \[ M= \Big\{ \sum_{n=1}^\infty t_n x_n \,\colon\, \textrm{ each } t_n\geq 0 \textrm{ and } \sum_{n=1}^\infty t_n=1\Big\}. \] First note that $M$ is closed in $C$. Indeed, assume that $\{u_k\}_{k=1}^\infty \subset M$ converges to $u\in C$. For each $k\in \mathbb{N}$, write $u_k=\sum_{n=1}^\infty t^{(k)}_n x_n$ where each $t^{(k)}_n \geq 0$ and $\sum_{n=1}^\infty t^{(k)}_n=1$. As $(x_n)$ is basic and $u\in [x_n]$ we may write $u= \sum_{n=1}^\infty t_n x_n$. It follows that $t_n= \lim_{k\to \infty} t^{(k)}_n\geq 0$.
Now since $(x_n)$ is wide-$(s)$ there is a constant $L>0$ such that \begin{equation}\label{eqn:wide}
L \Big| \sum_{n=1}^\infty a_n\Big| \leq \Big\| \sum_{n=1}^\infty a_n x_n \Big\|\quad\forall (a_n)\in \ell_1. \end{equation} Thus the series $\sum_{n=1}^\infty t_n$ converges and hence \[
L\Big| 1 - \sum_{n=1}^\infty t_n\Big| \leq \Big\| \sum_{n=1}^\infty \big( t^{(k)}_n - t_n \big) x_n\Big\|= \| u_k - u \|\to 0, \] which implies $\sum_{n=1}^\infty t_n=1$ and so $u\in M$. Now since $M$ is closed convex and contains $(x_n)$ we obtain $K\subset M$. To prove the converse inclusion, let $u=\sum_{n=1}^\infty t_n x_n\in M$. We have to prove that $u\in K$. Let $v\in K$ be fixed and define for $k\in \mathbb{N}$, \[ u_k =\big(1 - \sum_{n=1}^k t_n\big)\cdot v + \sum_{n=1}^k t_n x_n. \] Then an easy computation shows we can conclude that $u_k \in K$ for all $k$ and, moreover, since $\sum_{n=1}^k t_n\to 1$ as $k\to\infty$, \[
\| u_k - u\| \leq \Big( 1 - \sum_{n=1}^k t_n \Big) \| v\| + \sup_{n\in \mathbb{N}}\| x_n\| \sum_{n=k+1}^\infty t_n\to 0, \] which implies $u\in K$, as it is closed. This proves the claim. \end{proof}
With the set $K$ in hand, we proceed to construct the map $f$. Choose a sequence of scalars $(\alpha_n)$ satisfying the conditions: \begin{enumerate} \item $0<\alpha_n <1/2$ for $n\in \mathbb{N}$. \item $\alpha_n \searrow 0$.
\item $\displaystyle\sum_{n=1}^\infty \alpha_n< \frac{1}{4\mathcal{K}} \frac{\inf_n \| x_n\|}{ \sup_n \| x_n\|}$.\vskip .2cm \end{enumerate} It is obvious that such numbers can be found. We then define $f\colon K \to K$ as follows: if $\sum_{n=1}^\infty t_n x_n\in K$, then \[ f\Big( \sum_{n=1}^\infty t_n x_n \Big) = (1 - \alpha_1) t_1 x_1 + \sum_{n=2}^\infty \big( (1- \alpha_n)t_n + \alpha_{n-1}t_{n-1}\big) x_n. \] Clearly $f$ is an affine fixed point free self map of $K$. It remains to show that $f$ is bi-Lipschitz. In order to verify this, we let \[ z_n= (1 - \alpha_n) x_n + \alpha_n x_{n+1},\quad n\in \mathbb{N}. \] Notice that $(z_n)$ is a non-trivial convex basis of $(x_n)$. Furthermore, \[ f(x) = \sum_{n=1}^\infty t_n z_n\quad \forall\, x:=\sum_{n=1}^\infty t_n x_n \in K. \] So, it is enough to prove the following: \paragraph{Claim:} $(z_n)$ is equivalent to $(x_n)$.
\begin{proof}[Proof of Claim] First we note that $\inf_n \| z_n\|>0$. To see this, it suffices to note that if $(x_n)$ is wide-$(s)$ then for some constant $L>0$ such that (\ref{eqn:wide}) holds, we have \[
\| z_n\| = \| (1 - \alpha_n) x_n + \alpha_n x_{n+1}\| \geq L \big| (1 - \alpha_n) + \alpha_n \big|= L\quad\forall \,n\in \mathbb{N}. \] Thus $(z_n)$ is seminormalized. We shall show now that $(z_n)$ is basic. If, for $n\in \mathbb{N}$, we define \[ w_n= (1 - \alpha_n) x_n \] then $(w_n)$ is equivalent to $(x_n)$, by Lemma \ref{prop:1HJ}. So, it is basic. Furthermore, $(z_n)$ is equivalent to $(w_n)$ by the Principle of Small Perturbations (\cite[Theorem 1.3.9]{AK}). For, it can be readily verified that \[
\sum_{n=1}^\infty \frac{\| w_n - z_n \|}{\| w_n\|}< \frac{1}{2\mathcal{K}}. \] Thus $(z_n)$ is basic and is equivalent to $(x_n)$. This establishes the claim and completes the proof of the theorem. \end{proof} \end{proof}
\section{The $\mathcal{G}$-FPP in spaces with Pe\l czy\'nski's property $(u)$} In this section we give an affirmative answer to Question \ref{qtn:1} in spaces with property $(u)$. More precisely, we obtain the following result.
\begin{thm}\label{thm:6.1} Let $X$ be a Banach space with the property $(u)$. Then $C\in \mathcal{B}(X)$ is weakly compact if and only if it has the $\mathcal{G}$-FPP for the class of affine uniformly bi-Lipschitz maps. \end{thm}
\begin{proof} It suffices to prove the converse implication. Assume that $C$ is not weakly compact. By Lemma \ref{lem:KL2} either $C$ contains an $\ell_1$-basic sequence or it contains a $\co$-summing basic sequence. In either case we see that $C$ contains a wide-$(s)$ sequence $(x_n)$ such that $(x_{n+p})$ is equivalent to $(x_n)$, uniformly in $p\in \mathbb{N}$. Hence for $K=\overline{\conv}\big( \{ x_n\}\big)$, the map $f\colon K\to K$ given by \[ f( x) = \sum_{n=1}^\infty t_n x_{n+1}\quad \textrm{for } \,x= \sum_{n=1}^\infty t_n x_n\in K, \] is affine, fixed point free and, since the equivalence constants do not depend on $p$, uniformly bi-Lipschitz. This concludes the proof of the theorem. \end{proof}
An immediate corollary of Theorem \ref{thm:6.1} is
\begin{cor} Let $X$ be a Banach space. Assume that $X$ is either $L$-embedded or has the hereditary Dunford-Pettis property. Then $C\in \mathcal{B}(X)$ is weakly compact if and only if it has the $\mathcal{G}$-FPP for the class of affine uniformly bi-Lipschitz maps. \end{cor}
\begin{rmk} Recall \cite{D} that a Banach space $X$ is said to have the Dunford-Pettis property if for every pair of weakly null sequences $(x_n)\subset X$ and $(x^*_n)\subset X^*$ one has $\lim_{n\to \infty} \langle x_n , x^*_n\rangle=0$. Further, $X$ is said to have the hereditary Dunford-Pettis property if all of its closed subspaces have the Dunford-Pettis property. It is also known (cf. proof of \cite[Theorem 2.1]{KO}) that spaces with the hereditary Dunford-Pettis property have property $(u)$. \end{rmk}
\nocite{*}
\end{document}
|
arXiv
|
Milne-Thomson circle theorem
In fluid dynamics the Milne-Thomson circle theorem or the circle theorem is a statement giving a new stream function for a fluid flow when a cylinder is placed into that flow.[1][2] It was named after the English mathematician L. M. Milne-Thomson.
Let $f(z)$ be the complex potential for a fluid flow, where all singularities of $f(z)$ lie in $|z|>a$. If a circle $|z|=a$ is placed into that flow, the complex potential for the new flow is given by[3]
$w=f(z)+{\overline {f\left({\frac {a^{2}}{\bar {z}}}\right)}}=f(z)+{\overline {f}}\left({\frac {a^{2}}{z}}\right).$
with the same singularities as $f(z)$ in $|z|>a$, and $|z|=a$ is a streamline. On the circle $|z|=a$, $z{\bar {z}}=a^{2}$, therefore
$w=f(z)+{\overline {f(z)}}=2\,\operatorname {Re} f(z),$
which is real, so the stream function $\operatorname {Im} w$ vanishes on the circle; this is why $|z|=a$ is a streamline.
Example
Consider a uniform irrotational flow $f(z)=Uz$ with velocity $U$ flowing in the positive $x$ direction and place an infinitely long cylinder of radius $a$ in the flow with the center of the cylinder at the origin. Then $f\left({\frac {a^{2}}{\bar {z}}}\right)={\frac {Ua^{2}}{\bar {z}}},\ \Rightarrow \ {\overline {f\left({\frac {a^{2}}{\bar {z}}}\right)}}={\frac {Ua^{2}}{z}}$, hence using circle theorem,
$w(z)=U\left(z+{\frac {a^{2}}{z}}\right)$
represents the complex potential of uniform flow over a cylinder.
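As a quick check (using only the potential above), on the cylinder surface one may write $z=ae^{i\theta }$, so that ${\frac {a^{2}}{z}}=ae^{-i\theta }$ and
$w=U\left(ae^{i\theta }+ae^{-i\theta }\right)=2Ua\cos \theta ,$
which is real; the stream function $\operatorname {Im} w$ therefore vanishes on $|z|=a$, while far from the cylinder $w\to Uz$, recovering the undisturbed uniform flow.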
See also
• Potential flow
• Conformal mapping
• Velocity potential
• Milne-Thomson method for finding a holomorphic function
References
1. Batchelor, George Keith (1967). An Introduction to Fluid Dynamics. Cambridge University Press. p. 422. ISBN 0-521-66396-2.
2. Raisinghania, M.D. (December 2003). Fluid Dynamics. ISBN 9788121908696.
3. Tulu, Serdar (2011). Vortex dynamics in domains with boundaries (PDF) (Thesis).
|
Wikipedia
|
\begin{definition}[Definition:Linear Measure/Depth]
'''Depth''' is linear measure in a dimension perpendicular to both length and breadth.
The choice of '''depth''' is often arbitrary, although in two-dimensional diagrams of three-dimensional figures, '''depth''' is usually imagined as being the dimension perpendicular to the plane the figure is drawn in.
\end{definition}
|
ProofWiki
|
Enumerator polynomial
In coding theory, the weight enumerator polynomial of a binary linear code specifies the number of words of each possible Hamming weight.
Let $C\subset \mathbb {F} _{2}^{n}$ be a binary linear code of length $n$. The weight distribution is the sequence of numbers
$A_{t}=\#\{c\in C\mid w(c)=t\}$
giving the number of codewords c in C having weight t as t ranges from 0 to n. The weight enumerator is the bivariate polynomial
$W(C;x,y)=\sum _{w=0}^{n}A_{w}x^{w}y^{n-w}.$
Basic properties
1. $W(C;0,1)=A_{0}=1$
2. $W(C;1,1)=\sum _{w=0}^{n}A_{w}=|C|$
3. $W(C;1,0)=A_{n}=1{\mbox{ if }}(1,\ldots ,1)\in C\ {\mbox{ and }}0{\mbox{ otherwise}}$
4. $W(C;1,-1)=\sum _{w=0}^{n}A_{w}(-1)^{n-w}=A_{n}+(-1)^{1}A_{n-1}+\ldots +(-1)^{n-1}A_{1}+(-1)^{n}A_{0}$
MacWilliams identity
Denote the dual code of $C\subset \mathbb {F} _{2}^{n}$ by
$C^{\perp }=\{x\in \mathbb {F} _{2}^{n}\,\mid \,\langle x,c\rangle =0{\mbox{ }}\forall c\in C\}$
(where $\langle \ ,\ \rangle $ denotes the vector dot product and which is taken over $\mathbb {F} _{2}$).
The MacWilliams identity states that
$W(C^{\perp };x,y)={\frac {1}{\mid C\mid }}W(C;y-x,y+x).$
The identity is named after Jessie MacWilliams.
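As a quick sanity check of the identity, the following short Python sketch (my own illustration, not part of the article) computes the weight distribution of the [3,1] binary repetition code and of its dual, the [3,2] even-weight code, and compares both sides of the MacWilliams identity numerically at a few sample points.

from itertools import product

def span(generators, n):
    # all F_2-linear combinations of the generator rows
    words = set()
    for coeffs in product([0, 1], repeat=len(generators)):
        words.add(tuple(sum(c * g[i] for c, g in zip(coeffs, generators)) % 2
                        for i in range(n)))
    return words

def weight_distribution(code, n):
    # coefficient list [A_0, ..., A_n]
    A = [0] * (n + 1)
    for c in code:
        A[sum(c)] += 1
    return A

def W(A, x, y):
    # W(C; x, y) = sum_w A_w x^w y^(n-w)
    n = len(A) - 1
    return sum(Aw * x**w * y**(n - w) for w, Aw in enumerate(A))

n = 3
C = span([(1, 1, 1)], n)                    # [3,1] repetition code
C_dual = span([(1, 1, 0), (0, 1, 1)], n)    # its dual, the [3,2] even-weight code

A, A_dual = weight_distribution(C, n), weight_distribution(C_dual, n)
print(A, A_dual)                            # [1, 0, 0, 1] and [1, 0, 3, 0]

# numerical check of the MacWilliams identity at a few sample points
for x, y in [(1.0, 2.0), (0.3, 0.7), (2.0, 5.0)]:
    assert abs(W(A_dual, x, y) - W(A, y - x, y + x) / len(C)) < 1e-9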
Distance enumerator
The distance distribution or inner distribution of a code C of size M and length n is the sequence of numbers
$A_{i}={\frac {1}{M}}\#\left\lbrace (c_{1},c_{2})\in C\times C\mid d(c_{1},c_{2})=i\right\rbrace $
where i ranges from 0 to n. The distance enumerator polynomial is
$A(C;x,y)=\sum _{i=0}^{n}A_{i}x^{i}y^{n-i}$
and when C is linear this is equal to the weight enumerator.
The outer distribution of C is the 2n-by-n+1 matrix B with rows indexed by elements of GF(2)n and columns indexed by integers 0...n, and entries
$B_{x,i}=\#\left\lbrace c\in C\mid d(c,x)=i\right\rbrace .$
The sum of the rows of B corresponding to the codewords of C is M times the inner distribution vector (A0,...,An).
A code C is regular if the rows of B corresponding to the codewords of C are all equal.
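A similarly small, self-contained Python sketch (again my own example, not taken from the article) computes the inner distribution and the outer distribution B for the [3,1] repetition code, checks the row-sum relation stated above, and confirms that this code is regular.

from itertools import product

def d(u, v):
    # Hamming distance
    return sum(a != b for a, b in zip(u, v))

n = 3
C = [(0, 0, 0), (1, 1, 1)]                  # the [3,1] repetition code
M = len(C)

inner = [sum(1 for c1 in C for c2 in C if d(c1, c2) == i) / M for i in range(n + 1)]

space = list(product([0, 1], repeat=n))     # all of GF(2)^n
B = {x: [sum(1 for c in C if d(c, x) == i) for i in range(n + 1)] for x in space}

# rows of B indexed by the codewords of C sum to M times the inner distribution
assert [sum(B[c][i] for c in C) for i in range(n + 1)] == [M * a for a in inner]

# C is regular: the rows of B corresponding to codewords are all equal
assert len({tuple(B[c]) for c in C}) == 1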
References
• Hill, Raymond (1986). A first course in coding theory. Oxford Applied Mathematics and Computing Science Series. Oxford University Press. pp. 165–173. ISBN 0-19-853803-0.
• Pless, Vera (1982). Introduction to the theory of error-correcting codes. Wiley-Interscience Series in Discrete Mathematics. John Wiley & Sons. pp. 103–119. ISBN 0-471-08684-3.
• J.H. van Lint (1992). Introduction to Coding Theory. GTM. Vol. 86 (2nd ed.). Springer-Verlag. ISBN 3-540-54894-7. Chapters 3.5 and 4.3.
|
Wikipedia
|
Christian Korff
Fri 10 Jul 2009, 3:00pm
Algebraic Geometry Seminar
Lounge seminar room.
A combinatorial description of the su(n) Verlinde algebra and its connection with (small) quantum cohomology
Fri 10 Jul 2009, 3:00pm-4:00pm
Employing an affine version of the plactic algebra (which arises in the Robinson-Schensted-Knuth correspondence) one can define non-commutative Schur polynomials. The latter can be employed to construct a combinatorial ring with integer structure constants. This combinatorial ring turns out to be isomorphic to what is called the su(n) WZNW fusion ring in the physics and the su(n) Verlinde algebra (extension over C) in the mathematics literature. There is a simple physical description of this ring in terms of quantum particles hopping on the affine su(n) Dynkin diagram. Many of the known complicated results concerning the fusion ring can be derived in a novel and elementary way. Using the particle picture one also arrives at new recursion formulae for the structure constants which are dimensions of moduli spaces of generalized theta functions. I explain the close connection with the small quantum cohomology ring of the Grassmannian and present a simple reduction formula which allows one to relate the structure constants of the su(n) Verlinde algebra with Gromov-Witten invariants.
Mon 14 Sep 2009, 3:00pm
Algebraic geometry seminar organizational meeting
Mon 14 Sep 2009, 3:00pm-4:00pm
Jonathan Wise
Deformation theory (without the cotangent complex)
Ben Young
Computing Donaldson-Thomas invariants for brane tilings with vertex operators
One particularly easy way to compute generating functions for 3D Young diagrams, and for "pyramid partitions", is to use the commutation properties of vertex operators. In fact, the vertex operator method turns out to apply to a broader class of box-counting / dimer cover problems.
We will describe this more general class of problems, and explicitly give their generating functions. All of these generating functions can be readily turned into Donaldson-Thomas partition functions for the associated quivers (modulo a superpotential) by introducing signs on certain variables.
Zheng Hua
Mon 5 Oct 2009, 3:10pm
Derived categories of toric Deligne-Mumford stacks
Mon 5 Oct 2009, 3:10pm-4:30pm
In this talk I will overview some results about derived categories of toric stacks, in particular the problem of existence of strong exceptional collections of line bundles. Some connections of this problem to mirror symmetry and combinatorics of polytopes will be mentioned.
Patrick Brosnan
Mon 19 Oct 2009, 3:10pm
WMAX 110
The zero locus of an admissible normal function.
Mon 19 Oct 2009, 3:10pm-4:30pm
If H is a variation of Hodge structure over a variety S, then there is a family of complex tori J(H) over S associated to H. Admissible normal functions are certain sections of J(H) over S. Roughly speaking, they are the ones that have the possibility of coming from algebraic geometry.
I will explain recent work with Gregory Pearlstein proving that the locus where a section of J(H) vanishes is an algebraic subvariety of S. This answers a conjecture of Griffiths and Green.
Tommaso de Fernex
PIMS 110
Rigidity properties of Fano varieties
I will discuss some deformation properties of Fano varieties. The general methods rely on the investigation of the variation of the cone of effective curves and, more generally, of the Mori chamber decomposition, which, according to Mori theory, encode information on the geometry of the variety. The talk is based on joint work with C. Hacon.
James Carrell
Mon 2 Nov 2009, 3:10pm
Rationally smooth Schubert varieties and B-modules
Mon 2 Nov 2009, 3:10pm-4:30pm
Kostant's remarkable formula (which I will recall) generalizes to smooth Schubert varieties in the flag variety G/B of an algebraic group G. On the other hand, Sara Billey noticed that rationally smooth Schubert varieties in G/B give an analogous formula, though it often doesn't agree with the remarkable formula in the singular case. This motivates the question of which rationally smooth Schubert varieties are smooth. I will show that there is a neat answer hinted at in the title.
Alan Stapledon
Arc spaces and equivariant cohomology
In the first of a two lecture series (to be completed by Dave Anderson immediately following), we present a new geometric interpretation of equivariant cohomology in which one replaces a smooth, complex $G$-variety $X$ by its associated arc space $J_{\infty} X$, with its induced $G$-action. If $X$ admits an `equivariant affine paving', then we deduce an explicit geometric basis for the equivariant cohomology ring. Moreover, under appropriate hypotheses, we obtain explicit bijections between bases for the equivariant cohomology rings of smooth varieties related by an equivariant, proper birational map. As an initial application, we present a geometric basis for the equivariant cohomology ring of a smooth toric variety.
Mon 9 Nov 2009, 4:00pm SPECIAL
Arc spaces and equivariant cohomology II
Let G be an algebraic group acting on a smooth complex variety X. In joint work with Alan Stapledon, we present a new perspective on the G-equivariant cohomology of X, which replaces the action of G on X with the induced action of the respective arc spaces. I will explain how this point of view allows one to interpret the cup product of classes of subvarieties geometrically via contact loci in the arc space, at least under suitable hypotheses on the singularities. As an explicit example, I'll discuss GL_n acting on the space of matrices.
Nicholas Proudfoot
Mon 16 Nov 2009, 3:10pm
Goresky-MacPherson duality and deformations of Koszul algebras
Mon 16 Nov 2009, 3:10pm-4:30pm
Goresky and MacPherson observed that certain pairs of algebraic varieties with torus actions have equivariant cohomology rings that are "dual" in a sense that I will define. Examples of such pairs come up naturally in both representation theory and combinatorics. I will explain how this duality is in fact a shadow of a much deeper relationship, in which certain categories of sheaves on the varieties are Koszul dual to each other.
Arend Bayer
The local projective plane, a fractal-like curve, and Gamma_1(3)
I will report on joint work with E. Macri on the space of stability conditions for the derived category of the total space of the canonical bundle on the projective plane. It is a 3-dimensional manifold, with many chamber decompositions coming from the behaviour of moduli spaces of stable objects under change of stability conditions.
I will explain how this space is related to classical results by Drezet and Le Potier on stable vector bundles on the projective plane. Using the space helps to determine the group of auto-equivalences, which includes a subgroup isomorphic to \Gamma_1(3). Finally, via mirror symmetry, it contains a universal cover of the moduli space of elliptic curves with \Gamma_1(3)-level structure.
Jarod Alper
Moduli spaces of curves with A_k-singularities
I will present work-in-progress on the construction of compactifications of the moduli space of curves with A_k-singularities. These spaces conjecturally give moduli interpretations of certain log canonical models of the moduli space of curves. This is joint work with David Smyth and Fred van der Wyck.
Xi Chen
Mon 7 Dec 2009, 2:00pm
Self rational maps of K3 surfaces
Mon 7 Dec 2009, 2:00pm-4:30pm
It is expected that a general K3 surface does not admit self rational maps of degree > 1. I'll give a proof of this conjecture for K3 surfaces of genus at least 4.
Kiumars Kaveh
Mon 21 Dec 2009, 1:30pm SPECIAL
MATH ANNEX 1102 (relocated because of PIMS closure)
Duistermaat-Heckmann measure for reductive group actions
Mon 21 Dec 2009, 1:30pm-3:00pm
In his influential works, A. Okounkov showed how to associate a convex body to a very ample G-line bundle L on a projective G-variety X such that it projects to the moment polytope of X and the push-forward of the Lebesgue measure on it gives the Duistermaat-Heckmann measure for the corresponding Hamiltonian action. He used this to prove the log-concavity of multiplicities in this case. Motivated by his work, recently Lazarsfeld-Mustata and Kaveh-Khovanskii developed a general theory of Newton-Okounkov bodies (without presence of a G-action).
In this talk, I will go back to the case where X has a G-action. I discuss how to associate different convex bodies to a graded G-algebra which in particular encode information about the multiplicities of the G-action. Using this I will define the Duistermaat-Heckmann measure for a graded G-algebra and prove a Brunn-Minkowski inequality for it. Also I will prove a Fujita approximation type result (from the theory of line bundles) for this Duistermaat-Heckmann measure. This talk is based on a preprint in preparation joint with A. G. Khovanskii.
Behrang Noohi
Mon 4 Jan 2010, 3:10pm
Lie theory of 2-group
Mon 4 Jan 2010, 3:10pm-4:30pm
In classical Lie theory a homomorphism of Lie groups f : H --> G, with H simply connected, is uniquely given by its effect on the Lie algebras Lie(f) : Lie(H) --> Lie(G). When f : H --> G is a weak morphism of Lie 2-groups, with H 2-connected (i.e., \pi_iH vanish for i=0,1,2), we prove that f is uniquely given by Lie(f), where Lie(f) : Lie(H) --> Lie(G) is the induced morphism in the derived category of 2-term diff. graded Lie algebras. We also exhibit a functorial construction of the 2-connected cover H<2> of a Lie 2-group H.
Hsian-Hua Tseng
Mon 11 Jan 2010, 3:00pm
On the decomposition of etale gerbes
Mon 11 Jan 2010, 3:00pm-4:00pm
Let G be a finite group. A G-gerbe over a space X may be intuitively thought of as a fiber bundle over X with fibers being the classifying space (stack) BG. In particular BG itself is the G-gerbe over a point. A more interesting class of examples consists of G-gerbes over BQ, which are equivalent to extensions of the finite group Q by G.
Considerations from physics have led to conjectures asserting that the geometry of a G-gerbe Y over X is equivalent to certain "twisted" geometry of a "dual" space Y'. A lot of progress has been made recently towards proving these conjectures in general. In this talk we'll try to explain these conjectures in the elementary concrete examples of G-gerbes over a point or BQ.
A smooth space of stable maps and a conjecture of Abramovich--Fantechi
The stack of stable maps parameterizes maps from complete curves having at worst nodal singularities into a smooth scheme. Generally this stack is not smooth, but we will explain how it can be made smooth by relaxing the condition that the source curves be complete. Although the resulting stack is not fibered in groupoids, and therefore may not be easily accessible to geometric intuition, it is a natural setting in which to construct the virtual fundamental class. We will discuss how this generalization can be used to prove a conjecture of Abramovich and Fantechi relating the virtual fundamental classes of two different moduli spaces parameterizing stable maps into mildly singular schemes.
Jason Lo
Stability Conditions and the Moduli of PT-Stable Objects
In the first half of the talk, I will explain the notion of PT stability, as defined by Bayer. I will also explain how it is related to classical stability conditions on sheaves, and other Bridgeland-type stability conditions. In the second half of the talk, I will discuss results on the moduli space of PT-stable objects from my thesis. In particular, I will explain how to use semistable reduction to obtain the valuative criterion of completeness for PT-stable objects.
Christian Schnell
Mon 1 Feb 2010, 3:00pm
Complex analytic Neron models
Mon 1 Feb 2010, 3:00pm-4:00pm
I will present a global construction of the Neron model for degenerating families of intermediate Jacobians; a classical case would be families of abelian varieties. The construction is based on Saito's theory of mixed Hodge modules; a nice feature is that it works in any dimension, and does not require normal crossing or unipotent monodromy assumptions. As a corollary, we obtain a new proof for the theorem of Brosnan-Pearlstein that, on an algebraic variety, the zero locus of an admissible normal function is an algebraic subvariety.
Masoud Kamgarpour
What is geometrization?
Geometrization is a process of replacing finite sets by algebraic varieties over a finite field and functions on such sets by sheaves on the corresponding variety. I will explain the meaning of the above sentence and state some applications.
Valery Lunts
Mon 8 Mar 2010, 3:10pm
Categorical resolution of singularities
Mon 8 Mar 2010, 3:10pm-4:30pm
I will introduce the notion of categorical resolution of singularities which is based on the concept of a smooth DG algebra. Then I will compare this notion with the traditional resolution in algebraic geometry and give some examples.
Max Lieblich
Mon 15 Mar 2010, 3:10pm
How should maximal orders move?
Mon 15 Mar 2010, 3:10pm-4:30pm
I will discuss joint work in progress with Rajesh Kulkarni on the moduli of maximal orders on surfaces. In contrast to the "classical" case of Azumaya algebras, ramified maximal orders have several potentially interesting moduli spaces. I will discuss three different scheme structures on the same set of points: a naive structure, a structure arising from a non-commutative version of Koll\'ar's condition on moduli of stable surfaces, and a structure that comes from hidden Azumaya algebras on stacky models of the underlying surface. Only (?) the third admits a natural compactification carrying a virtual fundamental class, giving rise to potentially new numerical invariants of division algebras over function fields of surfaces.
Vadim Vologodsky
Cartier transform in derived algebraic geometry
Abstract: Recently Kaledin proved a non-commutative generalization of the Deligne-Illusie theorem about the de Rham complex of an algebraic variety in characteristic p. I will explain how his approach can be used to prove new results in commutative algebraic geometry.
The weak approximation conjecture
Given a system of polynomial equations in some variables depending on one parameter, when can every solution which is a power series in the parameter be approximated to arbitrary order by solutions which are polynomial in the parameter? Hassett observed that a necessary condition is that the generic fiber is "rationally connected", i.e., for a general choice of the parameter, every pair of solutions is interpolated by a family of solutions which are the output of a polynomial function in one variable. Hassett and Tschinkel conjecture the converse holds: if a general fiber is rationally connected, then "weak approximation" holds.
I will review progress by Hassett -- Tschinkel, Colliot-Th\'el\`ene -- Gille, A. Knecht, Hassett, de Jong -- Starr, and Chenyang Xu. Then I will present a new perspective by Mike Roth and myself using "pseudo ideal sheaves", a higher codimension analogue of Fulton's effective pseudo divisors. I will also mention a theorem of Zhiyu Tian, who used this perspective to relate weak approximation to equivariant rational connectedness, thereby proving many new cases of weak approximation.
Mon 12 Apr 2010, 3:10pm
Rigid Cohomology for Algebraic Stacks
Mon 12 Apr 2010, 3:10pm-4:30pm
Rigid cohomology is one flavor of Weil cohomology. This entails for instance that one can associate to a scheme X over F_p a collection of finite dimensional Q_p-vector spaces H^i(X) (and variants with supports in a closed subscheme or compact support), which enjoy lots and lots of nice properties (e.g. functoriality, excision, Gysin, duality, a trace formula -- basically everything one needs to give a proof of the Weil conjectures).
Classically, the construction of rigid cohomology is a bit complicated and requires many choices, so that proving things like functoriality (or even that it is well defined) are theorems in their own right. An important recent advance is the construction by le Stum of an `Overconvergent site' which computes the rigid cohomology of X. This site involves no choices and so is trivially well defined, and many things (like functoriality) become transparent.
In this talk I'll explain a bit about classical rigid cohomology and the overconvergent site, and explain some new work generalizing rigid cohomology to algebraic stacks (as well as why one would want to do such a thing).
Askold Khovanskii
Mon 19 Apr 2010, 2:00pm SPECIAL
Moment polyhedra, semigroup of representations, and Kazarnovskii's theorem
Eric Katz
Lifting Tropical Curves in Space
Tropicalization is a technique that transforms algebraic geometric objects to combinatorial objects. Specifically, it associates a polyhedral complex to subvarieties of an algebraic torus. One may ask which polyhedral complexes arise in this fashion. We focus on curves which are transformed by tropicalization to immersed graphs. By applying toric geometry and Baker's specialization of linear systems from curves to graphs, we give a new necessary condition for a graph to come from an algebraic curve. In genus 1 and in certain geometric situations, this condition specializes to the well-spacedness condition discovered by Speyer and generalized by Nishinou and Brugalle-Mikhalkin. The techniques in this talk give a combinatorial way of thinking about deformation theory which we hope will have further applications.
Alan Carey
Twisted Geometric Cycles
I aim to explain a recent paper of my collaborator Bai-Ling Wang in which he proves that there is a generalisation of the Baum-Douglas geometric cycles which realise ordinary K-homology classes to the case of twisted K-homology. We propose that these twisted geometric cycles are D-branes in string theory. There is an analogous picture for manifolds that are not string.
Derived moduli space of complexes of coherent sheaves
I will present joint work with Behrend. For any smooth projective variety, we construct a differential graded scheme (stack) structure on the moduli space of complexes of coherent sheaves. The construction uses the Hochschild cochain complex of A-infinity bi-modules. As an application, we show that the DT/PT wall crossing can be interpreted as a change of stability conditions on dg schemes.
Alexander Polishchuk
Matrix factorizations and cohomological field theories
Ed Richmond
Coxeter foldings and generalized Littlewood-Richardson coefficients
Let G be a simple Lie group or Kac-Moody group and P a parabolic subgroup. One of the goals of Schubert calculus is to understand the product structure of the cohomology ring H^*(G/P) with respect to its basis of Schubert classes. If G/P is the Grassmannian, then the structure constants corresponding to the Schubert basis are the classical Littlewood-Richardson coefficients which appear in various topics such as enumerative algebraic combinatorics and representation theory.
In this talk, I will discuss joint work with A. Berenstein in which we give a combinatorial formula for these coefficients in terms of the Cartan matrix corresponding to G. In particular, our formula implies positivity of the "generalized" Littlewood-Richardson coefficients in the case where the off-diagonal Cartan matrix entries are not equal to -1. Moreover, this positivity result does not rely on the geometry of the flag variety G/P.
Brian Osserman
Variations on the theme of Grassmannians
Motivated by applications to Brill-Noether theory and higher-rank Brill-Noether theory, we discuss several variations on Grassmannians. These include "doubly symplectic Grassmannians", which parametrize subspaces which are simultaneously isotropic for a pair of symplectic forms, "linked Grassmannians", which parametrize tuples of subspaces of a chain of vector spaces linked via linear maps, and "symplectic linked Grassmannians", which is an amalgamation of the linked Grassmannian and symplectic Grassmannian.
Okounkov bodies and toric degenerations
Given a projective variety X of dimension d, a "flag" of subvarieties Y_i, and a big divisor D, Okounkov showed how to construct a convex body in R^d, and this construction has recently been developed further in work of Kaveh-Khovanskii and Lazarsfeld-Mustata. In general, this Okounkov body is quite hard to understand, but when X is a toric variety, it is just the polytope associated to D via the standard yoga of toric geometry. I'll describe a more general situation where the Okounkov body is still a polytope, and show that in this case X admits a flat degeneration to the corresponding toric variety. This project was motivated by examples, and as an application, I'll describe some toric degenerations of flag varieties and Schubert varieties. There will be pictures of polytopes.
Sammy Black
A state-sum formula for the Alexander polynomial
Daniel Erman
Smoothability of 0-dimensional schemes
A 0-dimensional scheme is said to be "smoothable" if it deforms to a disjoint union of points. Determining if a given 0-dimensional scheme is smoothable seems to be quite a difficult problem in general, and I will survey some of the main results in this area of research, including some recent progress that is joint work with David Eisenbud and Mauricio Velasco. In particular, I will explain how Gale duality provides a geometric obstruction to smoothability.
Vladimir Chernousov
MADs world and the world of torsors
One of the central theorems of classical Lie theory is that all split Cartan subalgebras of a finite dimensional simple Lie algebra over an algebraically closed field are conjugate. This result, due to Chevalley, yields the most elegant proof that the type of the root system of a simple Lie algebra is its invariant. In infinite dimensional Lie theory maximal abelian diagonalizable subalgebras (MADs) play the role of Cartan subalgebras in the classical theory. In the talk we address the problem of conjugacy of MADs in a big class of Lie algebras which are known in the literature as extended affine Lie algebras (EALA). To attack this problem we develop a bridge which connects the world of MADs in infinite dimensional Lie algebras and the world of torsors over Laurent polynomial rings.
Log canonical singularities and F-purity for hypersurfaces
To any polynomial over a perfect field of positive characteristic, one may associate an invariant called the F-pure threshold. This invariant, defined using the Frobenius morphism on the ambient space, can be thought of as a positive characteristic analog of the well-known log canonical threshold in characteristic zero. In this talk, we will present some examples of F-pure thresholds, and discuss the relationship between F-pure thresholds and log canonical thresholds. We also point out how these results are related to the longstanding open problem regarding the equivalence of (dense) F-pure type and log canonical singularities for hypersurfaces in complex affine space.
Atsushi Kanazawa
On Pfaffian Calabi-Yau 3-folds and Mirror Symmetry
We construct new smooth CY 3-folds with 1-dimensional Kaehler moduli by the Pfaffian method and determine their fundamental topological invariants. The existence of CY 3-folds with the computed invariants was previously conjectured by C. van Enckevort and D. van Straten. We then report mirror symmetry for these non-complete intersection CY 3-folds. We explicitly build their mirror candidates, some of which have 2 LCSLs, and check the mirror phenomenon.
Mircea Mustata
Shokurov's ACC Conjecture for log canonical thresholds on smooth varieties
Log canonical thresholds are invariants of singularities that play an important role in birational geometry. After an introduction to these invariants, I will describe recent progress on a conjecture of Shokurov predicting the Ascending Chain Condition for such invariants in any fixed dimension. This is based on joint work with Tommaso de Fernex and Lawrence Ein.
Tue 30 Nov 2010, 12:40pm SPECIAL
MATX 1102
Invariants of singularities in zero and positive characteristic
Tue 30 Nov 2010, 12:40pm-1:40pm
Invariants of singularities are defined in birational geometry via divisorial valuations, and are computed by resolution of singularities. In positive characteristic, one defines similar invariants via the action of the Frobenius morphism. The talk will give an overview of the known results and conjectures relating the two sets of invariants.
Jim Bryan
What is the probability that two randomly chosen matrices with entries in a finite field commute? : On the motivic class of the commuting variety and related problems.
In 1960, Feit and Fine were interested in the question posed by the title and to answer it, they found a beautiful formula for the number of pairs of commuting n by n matrices with entries in the field F_q. Their method amounted to finding a stratification of the variety of commuting pairs of matrices into strata each of which is isomorphic to an affine space (of various dimensions). Consequently, their computation can be interpreted as giving a formula for the motivic class of the commuting variety, that is, its class in the Grothendieck group of varieties. We give a simple, new proof of their formula and we generalize it to various other settings. This is joint work with Andrew Morrison.
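For very small cases the question in the title can be answered by brute force. The following Python sketch (my own illustration, not taken from the abstract; it assumes q is prime, so that Z/q is the field F_q) enumerates all pairs of n-by-n matrices over F_q and counts the commuting ones.

from itertools import product

def commuting_probability(n, q):
    # all n-by-n matrices over Z/q, stored as flattened row-major tuples
    mats = [tuple(m) for m in product(range(q), repeat=n * n)]

    def mul(A, B):
        # matrix product mod q, entry (i, j) in row-major order
        return tuple(sum(A[i * n + k] * B[k * n + j] for k in range(n)) % q
                     for i in range(n) for j in range(n))

    pairs = sum(1 for A in mats for B in mats if mul(A, B) == mul(B, A))
    return pairs / len(mats) ** 2

print(commuting_probability(2, 2))   # probability for 2-by-2 matrices over F_2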
Dylan Rupel
Rank Two Quantum Cluster Algebras and Valued Quiver Representations
A quantum cluster algebra is a subalgebra of an ambient skew field of rational functions in finitely many indeterminates. The quantum cluster algebra is generated by a (usually infinite) recursively defined collection called the cluster variables. Explicit expressions for the cluster variables are difficult to compute on their own as the recursion describing them involves division inside this skew field. In this talk I will describe the rank 2 cluster variables explicitly by relating them to varieties associated to valued representations of a quiver with 2 vertices. I will also indicate to what extent the theory I present is applicable to higher rank quantum cluster algebras.
Finite Groups of Low Essential Dimension
Informally, the essential dimension of a finite group is the minimal number of parameters required to describe any of its actions. It has connections to Galois cohomology and several open problems in algebra. I will discuss how one can use techniques from birational geometry to compute this invariant and indicate some of its applications to the Noether Problem, inverse Galois theory, and the simplification of polynomials.
Building on I. Dolgachev and V. Iskovskikh's recent work classifying finite subgroups of the plane Cremona group, I will classify all finite groups of essential dimension 2. In addition, I show that the symmetric group of degree 7 has essential dimension 4 using Yu. Prokhorov's classification of all finite simple groups with faithful actions on rationally connected threefolds.
Deformation theory
I will explain an interpretation of Illusie's results on the deformation theory of commutative rings in terms of the cohomological classification of torsors and gerbes. Then I'll show how this point of view can be used to solve some other deformation problems. I'll also indicate some deformation problems that I don't know how to approach this way.
Ben Davison
Motivic Donaldson-Thomas invariants and 3-manifolds
I will describe recent work on motivic DT invariants for 3-manifolds, which are expected to provide a refinement of Chern-Simons theory. The conclusion will be that these should be possible to define and work with, but there will be some interesting problems along the way. There will be a discussion of the problem of upgrading the description of the moduli space of flat connections as a critical locus to the problem of describing the fundamental group algebra of a 3-fold as a "noncommutative critical locus," including a result regarding topological obstructions for this problem. I will also address the question of whether a motivic DT invariant may be expected to pick up a finer invariant of 3-manifolds than just the fundamental group.
Jack Hall
The Hilbert stack
The Hilbert scheme of projective space is a fundamental moduli space in algebraic geometry. The naive generalization of the Hilbert scheme can fail to exist for some spaces of interest, however. D. Rydh and I have generalized the Hilbert scheme, to the Hilbert stack, and have shown that the Hilbert stack is always algebraic. I will describe the Hilbert stack and some of the ideas behind the proof.
Malabika Pramanik
Mon 28 Feb 2011, 3:00pm
A multi-dimensional resolution of singularities with applications to analysis
Mon 28 Feb 2011, 3:00pm-5:00pm
The structure of the zero set of a multivariate polynomial is a topic of wide interest, in view of its ubiquity in problems of analysis, algebra, partial differential equations, probability and geometry. The study of such sets originated in the pioneering work of Jung, Abhyankar and Hironaka and has seen substantial recent advances in an algebraic setting.
In this talk, I will mention a few situations in analysis where the study of polynomial zero sets plays a critical role, and discuss prior work in this analytical framework in two dimensions. Our main result (joint with Tristan Collins and Allan Greenleaf) is a formulation of an algorithm for resolving singularities of a real-analytic function in any dimension with a view to applying it to a class of problems in harmonic analysis.
Shrawan Kumar
A generalization of Fulton's conjecture for arbitrary groups
This is a report on my joint work with Prakash Belkale and Nicolas Ressayre. We prove a generalization of Fulton's conjecture which relates intersection theory on an arbitrary flag variety to invariant theory.
Sándor Kovács
Irrational centers
Vanishing theorems and rational singularities are closely related and play important roles in classification theory as well as other areas of algebraic geometry. In this talk I will discuss these roles, their interrelations, and a new notion that helps understand these singularities and their connections to the singularities of the minimal model program and moduli theory of higher dimensional varieties.
Dragos Oprea
Generic strange duality for K3 surfaces
We consider moduli spaces of semistable sheaves on an elliptically fibered K3 surface, so that the first Chern class of the sheaves is a numerical section. For pairs of complementary such moduli spaces subject to numerical restrictions, we establish the strange duality isomorphism on sections of theta line bundles. We will also present applications to Brill-Noether theory for sheaves on a K3.
Sabin Cautis
Mon 4 Apr 2011, 3:00pm
Categorified Heisenberg actions on Hilbert schemes
Mon 4 Apr 2011, 3:00pm-4:00pm
I will describe an action of a quantized Heisenberg algebra on the (derived) categories of coherent sheaves on Hilbert schemes of ALE spaces (crepant resolutions of C^2/G). This action essentially lifts the actions of Nakajima and Grojnowski on the cohomology of these spaces. (Joint with Tony Licata.)
Brendan Hassett
Birational geometry of holomorphic symplectic varieties
We propose a general framework governing the intersection properties of extremal rays of irreducible holomorphic symplectic manifolds under the Beauville-Bogomolov form. Our main thesis is that extremal rays associated to Lagrangian projective subspaces control the behavior of the cone of curves. We explore implications of this philosophy for examples like Hilbert schemes of points on K3 surfaces and generalized Kummer varieties. We also present evidence supporting our conjectures in specific cases. (joint with Y. Tschinkel)
A Uniform Description of Test Ideals and Multiplier Ideals
After reviewing some basic constructions with multiplier ideals on complex algebraic varieties, we recall the definition of multiplier ideals in positive characteristic and highlight the failure of some desirable properties to carry over in this setting. This leads us to a related measure of singularities coming from commutative algebra -- the test ideal -- which seems to exhibit better behavior than the multiplier ideal in positive characteristic. While test ideals were first introduced in the theory of tight closure, our goal in this talk will be to describe a new and algebro-geometric characterization of test ideals using regular alterations. This characterization also holds for multiplier ideals in characteristic zero (but not in positive characteristic!!!), providing a kind of uniform description with new insight and intuition. Time permitting, we will use this result to give an analogue of Nadel Vanishing in positive characteristic.
Conan Leung
The Chinese University of Hong Kong
Mon 25 Jul 2011, 3:00pm SPECIAL
SYZ mirror symmetry for toric manifolds
Mon 25 Jul 2011, 3:00pm-5:00pm
In this talk, I will explain the SYZ proposal on describing mirror symmetry as a Fourier-Mukai transform along a special Lagrangian torus fibration. By computing certain open Gromov-Witten invariants, we show that the mirror map is the same as the SYZ map for certain toric Calabi-Yau manifolds.
Motivic Donaldson-Thomas invariants for the one loop quiver with potential
In this talk I will give an introduction to Donaldson-Thomas invariants, and then their motivic incarnation. I'll discuss motivic vanishing cycles and lambda rings, before moving to the main example of the talk - the one loop quiver with potential. It turns out that the motivic DT invariants in this simple example have a neat presentation, and in a break with other worked-out examples these invariants really involve the monodromy of the motivic vanishing cycle.
Artan Sheshmani
Higher rank stable pairs and virtual localization
We introduce a higher rank analog of the Pandharipande-Thomas theory of stable pairs on a Calabi-Yau threefold X. More precisely, we develop a moduli theory for frozen triples given by the data O_X^{r}(-n)-->F where F is a sheaf of pure dimension 1. The moduli space of such objects does not naturally determine an enumerative theory: that is, it does not naturally possess a perfect symmetric obstruction theory. Instead, we build a zero-dimensional virtual fundamental class by hand, by truncating a deformation-obstruction theory coming from the moduli of objects in the derived category of X. This yields the first deformation-theoretic construction of a higher-rank enumerative theory for Calabi-Yau threefolds. We calculate this enumerative theory for local P^1 using the Graber-Pandharipande virtual localization technique. In a sequel to this project (arXiv:1101.2251), we show how to compute similar invariants associated to frozen triples using Kontsevich-Soibelman and Joyce-Song wall-crossing techniques.
Cox rings and pseudoeffective cones of projectivized toric vector bundles
Projectivized toric vector bundles are a large class of rational varieties that share some of the pleasant properties of toric varieties and other Mori dream spaces. Hering, Mustata and Payne proved that the Mori cones of these varieties are polyhedral and asked if their Cox rings are indeed finitely generated. We present the complete answer to this question. There are several proofs of a positive answer in the rank two case [Hausen-Suss, Gonzalez]. One of these proofs relies on the simple structure of the Okounkov body of these varieties with respect to a special flag of subvarieties. For higher ranks we study projectivizations of a special class of toric vector bundles that includes cotangent bundles, whose associated Klyachko filtrations are particularly simple. For these projectivized bundles, we give generators for the cone of effective divisors and a presentation of the Cox ring as a polynomial algebra over the Cox ring of a blowup of a projective space along a sequence of linear subspaces [Gonzalez-Hering-Payne-Suss]. As applications, we show that the projectivized cotangent bundles of some toric varieties are not Mori dream spaces and give examples of projectivized toric vector bundles whose Cox rings are isomorphic to that of M_{0,n}.
Kai Behrend
Derived Moduli of Noncommutative Projective Schemes
I will talk on joint work in progress with Behrang Noohi. We study the GIT problem given by the differential graded Lie algebra of Hochschild cochains of a finite graded algebra. This will lead to a definition of stability for non-commutative polarized projective schemes, and to the construction of quasi-projective moduli spaces for them. These moduli spaces are differential graded schemes. There may be new moduli spaces with symmetric obstruction theories coming out of this.
William Slofstra
Twisted strong Macdonald theorems
Let L be a reductive Lie algebra. The strong Macdonald theorems of Fishel, Grojnowski, and Teleman state that the cohomology algebras of L[z]/z^N and L[z,s] (where s is an odd variable) are free skew-commutative algebras with generators in certain degrees. The theorems were originally conjectured by Hanlon and Feigin as Lie algebra cohomology extensions of Macdonald's constant term identity in algebraic combinatorics. The proof uses ideas from the Kahler geometry of the loop Grassmannian.
I will explain how to extend Fishel, Grojnowski, and Teleman's ideas to generalized flag varieties of (twisted) loop groups, and consequently get strong Macdonald theorems for p[s] and p/z^N p when p is a parahoric. When p has a non-trivial parabolic component the cohomology of p/z^N p is no longer free, as it contains a factor which is isomorphic to the cohomology algebra of the flag variety of the corresponding parabolic.
Kirill Zainoulline
Equivariant pretheories and invariants of torsors
We will introduce and study the notion of an equivariant pretheory. Basic examples include equivariant Chow groups, equivariant K-theory and equivariant algebraic cobordism. As an application we generalize the theorem of Karpenko-Merkurjev on G-torsors and rational cycles; to every G-torsor E and a G-equivariant pretheory we associate a graded ring which serves as an invariant of E. In the case of Chow groups this ring encodes the information concerning the motivic J-invariant of E and in the case of Grothendieck's K_0 -- indexes of the respective Tits algebras.
Bruno Kahn
Institut de Mathematiques de Jussieu
Somekawa's K-groups and Voevodsky's Hom groups
We construct an isomorphism from Somekawa's K-group associated to a finite collection of semi-abelian varieties (or more general sheaves) over a perfect field to a corresponding Hom group in Voevodsky's triangulated category of effective motivic complexes.
Stefan Gille
Rost nilpotence for surfaces
Let S be a smooth projective scheme over a field F. We say that Rost nilpotence is true for S in the category of Chow motives with integral coefficients if for any field extension E/F the kernel of
CH_2(S x S) --> CH_2(S_E x S_E)
consists of nilpotent correspondences. In my talk I will present a proof of Rost nilpotence for surfaces over fields of characteristic zero which uses Rost's theory of cycle modules.
Daniel Moseley
Group actions on cohomology of varieties
This talk will explore a technique of using equivariant cohomology to say something about the action of a group on the cohomology of a space. In particular, we will look at examples of cohomology of flag varieties and configuration spaces. Also, we will look at a family of algebras with an algebro-geometric interpretation that admits an S_n action, and use the results we developed to make progress toward a result about these algebras.
Yuri Burda
Coverings over tori and application to Klein's resolvent problem
Topological essential dimension of a covering is the minimal dimension of a base-space such that the original covering can be induced from some covering over this base-space.
We will see how to compute the topological essential dimension for coverings over tori.
Surprisingly this question turns out to be useful in obtaining estimates in Klein's resolvent problem: what is the minimal number k such that the equation z^n+a_1z^{n-1}+...+a_n=0 with complex coefficients a_1,...,a_n can be reduced by means of a rational substitution y=R(z,a_1,...,a_n) to an equation on y depending on k algebraically independent parameters.
We will also obtain some bounds in the analogue of this question for other algebraic functions and get a sharp result for functions on C^n unramified outside of coordinate hyperplanes.
The quantum BCOV theory and higher-genus mirror symmetry
The physicists Bershadsky, Cecotti, Ooguri and Vafa argued that the mirror to the theory of Gromov-Witten invariants is provided by a certain quantum field theory on Calabi-Yau varieties. I'll describe joint work in progress with Si Li, which gives a rigorous construction of the BCOV quantum field theory. In the case of the elliptic curve, Li has shown that our theory recovers the Gromov-Witten invariants of the mirror curve, thus proving mirror symmetry in this example.
Ezra Getzler
Higher analytic stacks
Using techniques from the theory of Banach algebras, Kuranishi constructed an analytic germ of the moduli space of deformations of a holomorphic vector bundle on a compact complex manifold. In order to extend his results to deformation theory of perfect complexes, we introduce higher analytic stacks, which are simplicial Banach analytic varieties satisfying a horn filler condition modeled on that satisfied by Kan complexes. We show that there is a natural way to attach a higher analytic stack to a Banach algebra, and apply this to the deformation theory of perfect complexes. This is a joint work with Kai Behrend.
David Treumann
The coherent-constructible correspondence for toric varieties and hypersurfaces
The coherent constructible correspondence matches coherent sheaves on a toric variety to constructible sheaves on a compact torus T^n. Microlocal sheaf theory allows one to view the latter sort of object as a Lagrangian submanifold in the symplectic manifold T^n x R^n, making this a form of mirror symmetry. I will discuss this correspondence, and an extension of it to hypersurfaces in toric varieties, which in some sense matches coherent sheaves to Legendrian submanifolds of the contact manifold T^n x S^{n-1}.
Wed 18 Jan 2012, 3:00pm SPECIAL
Algebraic Geometry Seminar / Topology and related seminars
WMAX 110 (PIMS)
Flops and about
Wed 18 Jan 2012, 3:00pm-4:00pm
Stratified flops show up in the birational geometry of symplectic varieties such as moduli spaces of sheaves. Varieties related by such flops are often derived equivalent (meaning that there is an equivalence between their derived categories of coherent sheaves). After recalling a bit about the geometry of flops I will discuss a general method for constructing such equivalences and illustrate with some examples and applications.
Chern-Simons functional and Donaldson-Thomas theory
I will survey the constructions of holomorphic Chern-Simons functionals for the moduli spaces of sheaves on CY 3-folds and several applications to the theoretical and computational aspects of Donaldson-Thomas theory.
Alexander Woo
Local complete intersection Schubert varieties
We characterize Schubert varieties (for GLn) which are local complete intersections (lci) by the combinatorial notion of pattern avoidance. For the Schubert varieties which are local complete intersections, we give an explicit minimal set of equations cutting out their neighborhoods at the identity. Although the statement only requires ordinary pattern avoidance, showing the other Schubert varieties are not lci appears to require more complicated combinatorial ideas which have their own geometric underpinnings. The Schubert varieties defined by inclusions, originally introduced by Reiner and Gasharov, turn out to be an important subclass of lci Schubert varieties. Using the explicit equations at the identity for the lci Schubert varieties, we can recover formulas for some of their local singularity invariants at the identity as well as explicit presentations for their cohomology rings.
This is joint work with Henning Ulfarsson (Reykjavik U.).
Aravind Asok
Classifying vector bundles on smooth affine varieties
If X is a finite CW complex of small dimension, information about the homotopy groups of unitary groups can be translated into cohomological classification results for complex vector bundles on X. I will explain how A^1-homotopy theory can be used in an analogous fashion in the classification of vector bundles on smooth affine varieties of small dimension. In particular, I will explain some joint work (in progress) with J. Fasel which shows how to give a complete classification of vector bundles on smooth affine 3-folds over certain fields. No knowledge of A^1-homotopy theory will be assumed.
Andrew Morrison
Motivic invariants of quivers via dimensional reduction
We explain how the computation of motivic Donaldson-Thomas invariants associated to a quiver with potential reduces to the computation of the motivic classes of simpler quiver varieties. This has led to the calculation of these invariants for some interesting Calabi-Yau geometries derived equivalent to a quiver with potential. Here we observe q-deformations of the classical generating series.
Counting Hyperelliptic curves on Abelian Surfaces with Quasimodular Forms
In this talk we will present a formula to count the number of hyperelliptic curves on a polarized Abelian surface, up to translation. This formula is obtained using orbifold Gromov-Witten theory, the crepant resolution conjecture and the Yau-Zaslow formula to relate hyperelliptic curves to rational curves on the Kummer surface Km(A). We will show how this formula can be described in terms of certain generating functions studied by P. A. MacMahon, which turn out to be quasimodular forms.
Ana-Maria Castravet
Rigid curves on moduli spaces of stable rational curves and arithmetic breaks
The Mori cone of curves of the Grothendieck-Knudsen moduli space of stable rational curves with n markings is conjecturally generated by the one-dimensional strata (the so-called F-curves). A result of Keel and McKernan states that a hypothetical counterexample must come from rigid curves that intersect the interior. In this talk I will show several ways of constructing rigid curves. In all the examples a reduction mod p argument shows that the classes of the rigid curves that we construct can be decomposed as sums of F-curves. This is joint work with Jenia Tevelev.
Angelo Vistoli
Scuola Normale Superiore (Italy)
Thu 22 Mar 2012, 2:00pm
The Nori correspondence
Thu 22 Mar 2012, 2:00pm-3:00pm
Let X be a variety over a field k, with a fixed rational point x_0 in X(k). Nori defined a profinite group scheme N(X,x_0), usually called Nori's fundamental group, with the property that homomorphisms from N(X,x_0) to a fixed finite group scheme G correspond to G-torsors P --> X, with a fixed rational point in the inverse image of x_0 in P. If k is algebraically closed this coincides with Grothendieck's fundamental group, but is in general very different. Nori's main theorem is that if X is complete, the category of finite-dimensional representations of N(X,x_0) is equivalent to an abelian subcategory of the category of vector bundles on X, the category of essentially finite bundles.
After describing Nori's results, I will explain my work in collaboration with Niels Borne, from the University of Lille, in which we extend them by removing the dependence on the base point, substituting Nori's fundamental group with a gerbe (in characteristic 0 this had already been done by Deligne), and give a simpler definition of essentially finite bundle, and a more direct and general proof of the correspondence between representations and essentially finite bundles.
Sandor Kovacs
Vanishing theorems and their failure in positive characteristic
The Kodaira vanishing theorem and its generalizations are extremely important tools in higher dimensional geometry and the failure of these theorems in positive characteristic causes great difficulties in extending the existing theories to that realm. In this talk I will discuss new results about cases where an appropriate vanishing theorem holds and cases where the expected one fails even in characteristic zero. These results are joint works (separately) with Christopher Hacon and with János Kollár.
Victoria Hoskins
Wed 29 Aug 2012, 3:00pm
Finite and infinite stratifications
Wed 29 Aug 2012, 3:00pm-4:00pm
In this talk we compare several different stratifications of parameter spaces of sheaves. The starting point is the infinite Yang-Mills stratification of the space of vector bundles on a compact Riemann surface, which is equal to the stratification by Harder-Narasimhan types. We then go on to look at finite stratifications of some quot schemes associated to a certain group action (the geometric invariant theory quotient for this action is a moduli space for sheaves) and relate this to a stratification of the quot scheme by Harder-Narasimhan types. Finally we discuss the limitations of the finite stratifications and how we could instead modify the set up to get infinite stratifications.
IMJ at Universite Paris 7
Motivic DT invariants of -2 curves
-2 curves are a favorite toy of birational geometers working in dimension 3 - they are slightly more complicated cousins of the resolved conifold. In this talk I'll try to give a reasonably self contained introduction to the theory of motivic DT invariants, and integrality, by explaining how this theory plays out in the case of "noncommutative" -2 curves. It turns out that, in common with the noncommutative conifold, the motivic DT partition function for -2 curves has a strikingly nice form, confirming the integrality conjecture in this case.
ESB 2012
Trilinear forms and Chern classes of Calabi-Yau threefolds
Let X be a Calabi-Yau threefold. We study the symmetric trilinear form on the integral second cohomology group of X defined by the cup product. Our study is motivated by C.P.C. Wall's classification theorem, which roughly says that the diffeomorphism class of a spin sixfold is determined by the trilinear form. We investigate the interplay between the Chern classes and the trilinear form of X, and demonstrate some numerical relations between them. If time permits, we also discuss some properties of the associated cubic form. This talk is based on a joint work with P.H.M. Wilson.
Dusty Ross
The Gerby Gopakumar-Marino-Vafa Formula
The Gopakumar-Marino-Vafa formula, proven almost ten years ago, evaluates certain triple Hodge integrals on moduli spaces of curves in terms of Schur functions. It has since been realized that the GMV formula is a special case of the Gromov-Witten/Donaldson-Thomas correspondence for Calabi-Yau threefolds.
In this talk, I will introduce an orbifold generalization of the GMV formula which evaluates certain abelian Hodge integrals in terms of loop Schur functions. I will introduce local Z_n gerbes over the projective line and show how the gerby GMV formula can be used to prove the GW/DT correspondence for this class of orbifolds. With the remaining time, I will sketch the main ideas in the proof of the formula and discuss generalizations to other geometries.
Shane Cernele
Essential Dimension and Error-Correcting Codes
Let p be a prime, r >= 3, and n_i = p^{a_i} for positive integers a_1,...,a_r. Set G = GL_{n_1} x ... x GL_{n_r}, and let \mu be a central subgroup of G. The Galois cohomology set H^1(K, G/\mu) classifies r-tuples of central simple algebras satisfying linear equations in the Brauer group Br(K). We study the essential dimension of G/\mu by constructing the 'code' associated to \mu.
Mathieu Huruguen
Toric embeddings over an arbitrary field
The equivariant embeddings of a split torus have been well known since the 70s. The isomorphism classes of such embeddings are classified by combinatorial objects called fans (after Demazure). In this talk, we address the classification of the embeddings of a not necessarily split torus and ask: Are the isomorphism classes of such embeddings classified by Galois-stable fans? If time permits, we will discuss the analogous results in the setting of spherical homogeneous spaces.
Colleen Robles
Singular loci of cominuscule Schubert varieties
The Schubert subvarieties of a rational homogeneous variety X are distinguished by the fact that their homology classes form an additive basis of the integer homology of X. In general, the Schubert varieties are singular.
The cominuscule rational homogeneous varieties are those admitting the structure of a compact Hermitian symmetric space (eg. complex Grassmannians). In this case, type-dependent characterizations of the singular loci are known.
I will discuss a type-independent description, by representation theoretic data, of the singular loci. The result is based on a characterization (joint with D. The) of the Schubert varieties by a non-negative integer and a marked Dynkin diagram.
(If there is time left, I will discuss the project in which the integer-diagram characterization arose as a technical lemma. This work aims to determine whether or not the Schubert classes admit any algebraic representatives (other than the Schubert varieties). It is a remarkable consequence of Kostant's work that these algebraic representatives are solutions of a system of PDE; as a consequence, differential geometric techniques may be applied to this algebro-topological question.)
Edward Richmond
Eigenvalues of hermitian matrices and equivariant cohomology of Grassmannians
One remarkable application of classical Schubert calculus on the cohomology of the Grassmannian is its close connection to the eigenvalue problem on sums of Hermitian matrices. The eigenvalue problem asks: given three sequences of real numbers, do there exist Hermitian matrices A, B, and C with A+B=C whose eigenvalues are given by the three sequences? This problem has a generalization to eigenvalues of majorized sums of Hermitian matrices, where we replace "A+B=C" with "A+B>C".
In this talk, I discuss joint work with D. Anderson and A. Yong where we show that the eigenvalue problem on majorized sums is related to the Schubert calculus on the torus-equivariant cohomology of the Grassmannian in the same way that classical Schubert calculus is related to eigenvalue problem on usual sums of Hermitian matrices. One consequence of this connection is a generalization of the celebrated saturation theorem to T-equivariant Schubert calculus.
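For context, the prototype of the constraints in question is Weyl's classical inequality: listing the eigenvalues of an n x n Hermitian matrix in decreasing order \lambda_1 \geq ... \geq \lambda_n, any matrices with A+B=C satisfy \lambda_{i+j-1}(C) \leq \lambda_i(A) + \lambda_j(B) whenever i+j-1 \leq n, together with the trace identity \sum_k \lambda_k(C) = \sum_k \lambda_k(A) + \sum_k \lambda_k(B). The full solution of the classical eigenvalue problem (Klyachko, Knutson-Tao) is a finite list of such linear inequalities indexed by nonvanishing Littlewood-Richardson coefficients, which is where the Schubert calculus enters.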
Jesse Wolfson
Strictifying Higher Principal Bundles
Higher stacks arise in many contexts in algebraic geometry and differential topology. The simplest type are higher principal bundles, special cases of which include principal bundles and n-gerbes. Locally, these objects are presentable by higher cocycles on a hypercover. With ordinary principal bundles, we obtain a bundle from a cocycle by using the cocycle to construct a Lie groupoid over the trivial bundle on the cover, and then passing to its orbit space. We establish the existence of an analogous construction for arbitrary higher principal bundles. Unpacking this construction in examples, we recover the familiar definitions of principal bundles, bundle gerbes, multiplicative gerbes and their equivariant versions, now seen as instances of a single construction. Applications beyond this include establishing a representability criterion for connected simplicial presheaves, and a version of Lie's third theorem for finite-dimensional L_oo-algebras.
Amnon Yekutieli
Ben Gurion University
Residues and Duality for Schemes and Stacks
Let K be a regular noetherian commutative ring. I consider finite type commutative K-algebras and K-schemes. I will begin by explaining the theory of rigid residue complexes on K-algebras, that was developed by J. Zhang and myself several years ago. Then I will talk about the geometrization of this theory: rigid residue complexes on K-schemes and their functorial properties. For any map between K-schemes there is a rigid trace homomorphism (that usually does not commute with the differentials). When the map of schemes is proper, the rigid trace does commute with the differentials (this is the Residue Theorem), and it induces Grothendieck Duality.
Then I will move to finite type Deligne-Mumford K-stacks. Any such stack has a rigid residue complex on it, and for any map between stacks there is a trace homomorphism. These facts are rather easy consequences of the corresponding facts for schemes, together with etale descent. I will finish by presenting two conjectures that refer to Grothendieck Duality for proper maps between DM stacks. A key condition here is that of a tame map of stacks.
Olivier Benoist
ENS Ulm (Paris)
Tue 13 Nov 2012, 3:00pm
On the birational geometry of the parameter space for codimension 2 complete intersections
Tue 13 Nov 2012, 3:00pm-4:00pm
Codimension 2 complete intersections in P^N have a natural parameter space that is a projective bundle over a projective space given by the data of the two equations.
In this talk, we will be interested in the birational geometry of this parameter space. In particular, is it a Mori dream space? If this is the case, is it possible to describe its MMP explicitly? We will give motivations for these questions and answers in particular cases.
Yunfeng Jiang
University of Utah/Imperial College London
On the Crepant Transformation Conjecture
Let X and X' be two smooth Deligne-Mumford stacks. We call a dashed arrow X-->X' a crepant transformation if there exists a third smooth Deligne-Mumford stack Y and two morphisms \phi:Y-> X, \phi': Y-> X' such that the pullbacks of canonical divisors are equivalent, i.e. \phi^*K_{X}\cong \phi'^*K_{X'}. The crepant transformation conjecture says that the Gromov-Witten theories of X and X' are equivalent if X-->X' is a crepant transformation. This conjecture was well studied in two cases: the first one is the case when X and X' are both smooth varieties; the other is the case that there is an actual morphism X-> |X'| to the coarse moduli space of X', resolving the singularities of X'. In this talk I will present some recent progress for this conjecture, especially in the case when both X and X' are smooth Deligne-Mumford stacks.
Ramified Satake Isomorphisms
I will explain how to associate a Satake-type isomorphism to certain characters of the compact torus of a split reductive group over a local field. I will then discuss the geometric analogue of this isomorphism. (Joint work with Travis Schedler).
John Calabrese
Donaldson-Thomas invariants and birational transformations
I'll discuss two results regarding how DT invariants (of smooth and projective Calabi-Yau threefolds) change under birational modifications. The first deals with flops and the second is related to the McKay correspondence and work of Jim Bryan and David Steinberg.
Thu 3 Jan 2013, 3:30pm SPECIAL
Semiample Bertini Theorems over Finite Fields
Thu 3 Jan 2013, 3:30pm-4:30pm
For a smooth projective variety over a finite field, Poonen's Bertini Theorem computes the probability that a high degree hypersurface section of that variety will be smooth. We prove a semiample generalization of Poonen's result, where the probability of smoothness is computed as a product of local probabilities taken over the fibers of a corresponding morphism. This is joint with Melanie Matchett Wood.
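For orientation, the closed form in Poonen's theorem is worth recalling: for a smooth projective subvariety X of P^n of dimension m over a finite field F_q, the density of hypersurfaces H (as the degree goes to infinity) for which H \cap X is smooth equals \zeta_X(m+1)^{-1}, where \zeta_X is the zeta function of X. The semiample generalization described above replaces this single zeta value by a product of local factors over the fibers of the corresponding morphism.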
Note for Attendees
ESB 4133 is the library room attached to the PIMS lounge.
Vivek Shende
Tue 8 Jan 2013, 3:30pm SPECIAL
Special divisors on hyperelliptic curves
Tue 8 Jan 2013, 3:30pm-4:30pm
A divisor on a curve is called "special'' if its linear equivalence class is larger than expected. On a hyperelliptic curve, all such divisors come from pullbacks of points from the line. But one can ask subtler questions. Fix a degree zero divisor Z; consider the space parameterizing divisors D where D and D+Z are both special. In other words, we wish to study the intersection of the theta divisor with a translate; the main goal is to understand its singularities and its cohomology.
The real motivation comes from number theory. Consider, in products of the moduli space of elliptic curves, points whose coordinates all correspond to curves with complex multiplication. The Andre-Oort conjecture controls the Zariski closure of sequences of such points (and in this case is a theorem of Pila) and a rather stronger equidistribution statement was conjectured by Zhang. The locus introduced above arises naturally in the consideration of a function field analogue of this conjecture. This talk presents joint work with Jacob Tsimerman.
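For readers unfamiliar with the terminology: on a smooth curve of genus g, Riemann-Roch gives h^0(D) - h^0(K-D) = deg D - g + 1, so h^0(D) exceeds deg D - g + 1 exactly when h^0(K-D) > 0; this surplus h^0(K-D) (equivalently h^1(D)) is what "special'' measures.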
Behrend's function is constant on Hilb^n(C^3)
By a theorem of Behrend, Donaldson-Thomas invariants can be defined in terms of a certain constructible function. We will compute this function at all points in the Hilbert scheme of points in three dimensions and see that it is constant. As a corollary we see that this Hilbert scheme of points is generically reduced and its components have the same dimension mod 2. This gives an application of the techniques of BPS state counting to a problem in Algebraic Geometry.
Thu 17 Jan 2013, 3:00pm SPECIAL
MATH 126 (previously announced as ESB 4127)
Categorical actions on Hilbert schemes of points of C^2
Thu 17 Jan 2013, 3:00pm-4:00pm
We define an action of a Heisenberg algebra on categories of coherent sheaves on Hilbert schemes of points of C^2. This lifts the constructions of Nakajima and Grojnowski from cohomology to derived categories. Vertex operator techniques are then used to extend this to an action of sl_infty. We end with applications to knot homology and a discussion of future research directions.
Theo Johnson-Freyd
Lattice Poisson AKSZ Theory
AKSZ Theory is a topological version of the Sigma Model in quantum field theory, and includes many of the most important topological field theories. I will present two generalizations of the usual AKSZ construction. The first is closely related to the generalization from symplectic to Poisson geometry. (AKSZ theory has already incorporated an analogous step from the geometry of cotangent bundles to the geometry of symplectic manifolds.) The second generalization is to phrase the construction in an algebrotopological language (rather than the usual language of infinite-dimensional smooth manifolds), which allows in particular for lattice versions of the theory to be proposed. From this new point of view, renormalization theory is easily recognized as the way one constructs strongly homotopy algebraic objects when their strict versions are unavailable. Time permitting, I will end by discussing an application of lattice Poisson AKSZ theory to the deformation quantization problem for Poisson manifolds: a _one_-dimensional version of the theory leads to a universal star-product in which all coefficients are rational numbers.
Alon Levy
Isotriviality and Descent of Polarized Morphisms
(joint with A. Bhatnagar)
Suppose that a polarized self-morphism \phi of X dominates a polarized self-morphism \psi of Y. Szpiro and Tucker asked whether, if \phi is isotrivial, \psi also descends to an isotrivial morphism. We give an affirmative answer in a large set of cases, including the case Y = P^1. At heart is a result of Petsche, Szpiro, and Tepper on isotriviality and potential good reduction for self-maps of P^n, which we extend to more general polarized self-morphisms of projective varieties.
Benjamin Young
The Combinatorial PT-DT correspondence
I will discuss a combinatorial problem which comes from algebraic geometry. The problem, loosely, is to show that two theories for "counting" "curves" (Pandharipande-Thomas theory and reduced Donaldson-Thomas theory) give the same answer. I will prove a combinatorial version of this correspondence in a special case (X is toric Calabi-Yau), where the difficult geometry reduces to a study of the "topological vertex'' (a certain generating function) in these two theories. The combinatorial objects in question are plane partitions, perfect matchings on the honeycomb lattice and the double dimer model.
There will be many pictures. This is a combinatorics talk, so no algebraic geometry will be used, except as an oracle for predicting the answer.
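For context, the simplest instance of the box counting mentioned above is MacMahon's generating function for plane partitions, \sum_\pi q^{|\pi|} = \prod_{n \geq 1} (1-q^n)^{-n}, where |\pi| is the number of boxes; roughly speaking, the topological vertex refines this count to plane partitions with prescribed asymptotics along the three coordinate legs.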
Davesh Maulik
Thu 4 Apr 2013, 4:10pm SPECIAL
Stable pairs and the HOMFLY polynomial
Thu 4 Apr 2013, 4:10pm-5:10pm
Given a planar curve singularity, Oblomkov and Shende conjectured a precise relationship between the geometry of its Hilbert scheme of points and the HOMFLY polynomial of the associated link. I will explain a proof of this conjecture, as well as a generalization to colored invariants proposed by Diaconescu, Hua, and Soibelman.
Martha Precup
Paving Hessenberg Varieties by Affines
Hessenberg varieties are closed subvarieties of the full flag variety. Examples of Hessenberg varieties include both Springer fibers and the flag variety. In this talk we will show that Hessenberg varieties corresponding to nilpotent elements which are regular in a Levi factor of the Lie algebra are paved by affines. We then provide a partial reduction from paving Hessenberg varieties for arbitrary elements to paving those corresponding to nilpotent elements, generalizing results of Tymoczko.
Jim Carrell
Cohomology of Springer Fibres and Springer's Weyl group action via localization
I will apply Martha Precup's theorem on affine pavings to describe the equivariant cohomology algebras of (regular) Springer fibres in terms of certain Weyl group orbits. This will also yield a simple description of Springer's representation of W on the cohomology of the above Springer fibres.
Alex Yong
Varieties in flag manifolds and their patch ideals
This talk addresses the problem of how to analyze and discuss singularities of a variety X that "naturally'' sits inside a flag manifold. Our three main examples are Schubert varieties, Richardson varieties and Peterson varieties. The overarching theme is to use combinatorics and commutative algebra to study the "patch ideals", which encode local coordinates and equations of X. Thereby, we obtain formulas and conjectures about X's invariants. We will report on projects with (subsets of) Erik Insko (Florida Gulf Coast U.), Allen Knutson (Cornell), Li Li (Oakland University) and Alexander Woo (U. Idaho).
Thu 23 May 2013, 3:00pm SPECIAL
4127 ESB (PIMS Video conference room)
An Archimedean Height Pairing on the Equivalence Relation Defining Bloch's Higher Algebraic Cycle Groups
Thu 23 May 2013, 3:00pm-4:00pm
The existence of a height pairing on the equivalence relation defining Bloch's higher cycle groups is a surprising consequence of some recent joint work by myself and Xi Chen on a nontrivial K_1-class on a self-product of a general K3 surface. I will explain how this pairing comes about.
Kirill Zaynullin
Tue 6 Aug 2013, 3:10pm
Formal group laws and divided difference operators
Tue 6 Aug 2013, 3:10pm-4:10pm
We discuss possible generalizations of the concept of Schubert and Grothendieck polynomials to the context of an arbitrary algebraic oriented cohomology theory. We apply these techniques to a rational formal group law and
obtain formulas for the respective polynomials in the A_n-cases. This is a joint project with C. Zhong.
Balin Fleming
Mon 9 Jun 2014, 3:00pm
Log jet schemes
Mon 9 Jun 2014, 3:00pm-4:00pm
The classical example of a log scheme is a variety X with a normal crossing divisor D. One can study differential forms on X with logarithmic (that is, order one) poles along D. Dual to these are log tangent vectors on (X, D), which have "zeroes along D." As ordinary jet schemes generalise tangent spaces, log jet schemes generalise log tangent spaces. We'll introduce the construction of log jet schemes for log schemes in the sense of K. Kato, which replace the divisor D with some combinatorial data, and some of their properties. This talk won't assume familiarity with jet schemes or log geometry.
Thu 20 Aug 2015, 2:30pm
PIMS, Room 4127
Tiling, SYZ and modular forms
Thu 20 Aug 2015, 2:30pm-3:30pm
I will introduce a class of Calabi-Yau manifolds associated to the polytope tilings. Their mirrors provide new insights into toric mirror symmetry, and are also closely related to certain modular forms. This is a joint work with Siu-Cheong Lau.
Michael Thaddeus
The universal implosion and the multiplicative Horn problem
The multiplicative Horn problem asks what constraints the eigenvalues of two n x n unitary matrices place on the eigenvalues of their product. The solution of this problem, due to Belkale, Kumar, Woodward, and others, expresses these constraints as a convex polyhedron in 3n dimensions and describes the facets of this polyhedron more or less explicitly. I will explain how the vertices of the polyhedron may instead be described in terms of fixed points of a torus action on a symplectic stratified space, constructed as a quotient of the so-called universal group-valued implosion.
Clemens Koppensteiner
Hochschild cohomology of torus-equivariant D-modules
In this talk I will discuss how to compute the Hochschild cohomology
of the category of D-modules on a quotient stack via a
compactification of the diagonal morphism. I will then apply this
construction to the case of quotients by a torus and describe the
Hochschild cohomology as the cohomology of a D-module on the loop
space of the quotient stack. This work is motivated by a desire to
understand the D-module equivalent of singular support of coherent
sheaves in Geometric Langlands.
Curve counting on Abelian varieties, modular forms, and the combinatorics of box counting
An Abelian variety (of complex dimension g) is an algebraic geometer's version of a torus — it is a variety which is topologically equivalent to a (real) 2g-dimensional torus. Geometers consider the problem of counting the number of curves on an Abelian variety subject to some set of constraints. In dimensions g=1,2, and 3, these geometric numbers have surprising connections to number theory and combinatorics. They occur as the coefficients of Fourier expansions of various modular forms and they can also be determined in terms of combinatorics of 2D and 3D partitions (a.k.a. box counting). We illustrate this using only elementary ideas from topology and combinatorics in the case of g=1. For g=2 and g=3, we describe recent theorems and conjectures which completely determine the enumerative geometry of Abelian surfaces and threefolds in terms of Jacobi forms and in the process we indicate how Jacobi forms arise from the combinatorics of box counting.
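To illustrate the g=1 case: connected degree n covers of an elliptic curve E by elliptic curves correspond to index n sublattices of \pi_1(E) = Z^2, of which there are \sigma_1(n) = \sum_{d|n} d, and these counts assemble (up to constants) into the quasimodular weight 2 Eisenstein series E_2(q) = 1 - 24 \sum_{n \geq 1} \sigma_1(n) q^n, the first instance of the modularity referred to above.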
Nathan Ilten
K-Stability for Fano Varieties with Torus Action
It has been recently shown by Chen-Donaldson-Sun that the existence of a Kähler-Einstein metric on a Fano manifold is equivalent to the property of K-stability. In general, however, this does not lead to an effective criterion for deciding whether such a metric exists, since verifying the property of K-stability requires one to consider infinitely many special degenerations called test configurations. I will discuss recent joint work with H. Süß in which we show that for Fano manifolds with complexity-one torus actions, there are only finitely many test configurations one needs to consider. This leads to an effective method for verifying K-stability, and hence the existence of a Kähler-Einstein metric. As an application, we provide new examples of Kähler-Einstein Fano threefolds.
Projectivity of the moduli space of stable log-varieties
This is a report on joint work with Zsolt Patakfalvi. We prove a strengthening of Kollár's Ampleness Lemma and use it to prove that any proper coarse moduli space of stable log-varieties of general type is projective. We also confirm the Iitaka-Viehweg conjecture on the subadditivity of log-Kodaira dimension for fiber spaces whose general fiber is of log general type.
Ivan Martino
Motivic classes of classifying stacks and their invariants
After introducing the class of the classifying stack of a (finite) group, BG, in the Grothendieck ring of algebraic stacks, I will present certain cohomological invariants for a group - the Ekedahl invariants.
I am going to show that the class of BG is trivial if G is a finite subgroup of GL_3(k) or if G is a finite linear (or projective) reflection group. (k is an algebraically closed field of characteristic zero.) I will also show that the Ekedahl invariants of the discrete 5-Heisenberg group are trivial.
These results relate naturally to Noether's Problem and to its obstruction, the Bogomolov multiplier.
At the end of the talk, I will link these results to the study of the motivic classes of the quotient varieties V/G by showing that such classes and the classes of BG exhibit the same combinatorial structure. Therefore, despite the title and technical terminology, I will aim at making the talk enjoyable also for the combinatorial community.
(Partial joint work with Emanuele Delucchi.)
Luis Garcia Martinez
Theta lifts and currents on Shimura varieties
The Shimura varieties X attached to orthogonal and unitary groups come equipped with a large collection of so-called special cycles. Examples include Heegner divisors on modular curves and Hirzebruch-Zagier cycles on Hilbert modular surfaces. We will review work of Borcherds and Bruinier using regularised theta lifts for the pair (SL_2,O(V)) to construct Green currents for special divisors. Then we will explain how to construct other interesting currents on X using the dual pair (Sp_4,O(V)). We will show that one obtains currents in the image of the regulator map of a certain motivic complex of X. Finally, we will describe how an argument using the Siegel-Weil formula allows us to relate the values of these currents to the product of a special value of an L-function and a period on a certain subgroup of Sp_4.
Kalle Karu
Non-finitely generated Cox rings
Cox rings of algebraic varieties were defined by Hu and Keel in relation to the minimal model program. The main question in the theory is to determine if the Cox ring of a variety is finitely generated. Such varieties are called Mori Dream Spaces. In this talk I will discuss examples of varieties that are not Mori Dream Spaces. These include toric surfaces blown up at a point and the moduli spaces of rational curves with n marked points. This is a joint work with Jose Gonzalez.
University of Colorado, Boulder
Olsson fans and logarithmic Gromov-Witten theory
Given a scheme X and a normal crossings divisor D in X, the Olsson fan of X and D is an algebraic stack that encodes the combinatorics of the components of D and their intersections. I will describe Olsson fans and show how they are constructed. Then I will discuss the moduli space of stable maps from curves into an Olsson fan, and highlight a number of applications to Gromov-Witten theory.
Harold Williams
Moduli Spaces of Microlocal Sheaves and Cluster Combinatorics
We explore a relationship between combinatorics and certain moduli spaces appearing in symplectic geometry. The combinatorics comes from the theory of cluster algebras, a kind of unified theory of canonical bases in representation theory and algebraic geometry. Some basic features of cluster algebras are that they are defined from purely combinatorial data (for example, a quiver) and they are coordinate rings of varieties covered by algebraic tori with transition functions of a special, universal form. Despite the originally representation-theoretic motivation for the subject, connections between cluster theory and symplectic geometry emerged later through the appearance of similar formulae in wall-crossing and mirror symmetry.
We will discuss recent work expanding on this connection, in particular providing a universal framework for interpreting cluster varieties as moduli spaces of objects in Fukaya categories of Weinstein 4-manifolds. In simple examples these moduli spaces reduce to well-known ones, such as character varieties of surfaces and positroid cells in the Grassmannian. An accompanying theme, which plays a key role both technically and in relating the symplectic perspective to more standard representation-theoretic ones, is the role of categories of microlocal sheaves as topological counterparts of Fukaya categories. This is joint work with Vivek Shende, David Treumann, and Eric Zaslow.
Irena Peeva
Matrix Factorizations for Complete Intersections
The concept of basis of a vector space over a field generalizes to the concept of generators of a module over a ring. However, generators carry very little information about the structure of the module, in contrast to bases, which are very useful in the study of vector spaces. Hilbert introduced the approach to describe the structure of modules by free resolutions. Hilbert's Syzygy Theorem shows that minimal free resolutions over a polynomial ring are finite. By a result of Serre, it follows that most minimal free resolutions over quotient rings are infinite. We will discuss the structure of such resolutions. The concept of matrix factorization was introduced by Eisenbud 35 years ago, and it describes completely the asymptotic structure of minimal free resolutions over a hypersurface. Matrix factorizations have applications in many fields of mathematics: for the study of cluster algebras, Cohen-Macaulay modules, knot theory, moduli of curves, quiver and group representations, and singularity theory. Starting with Kapustin and Li, physicists discovered amazing connections with string theory. In current joint work with Eisenbud, we introduce the concept of matrix factorization for complete intersection rings and show that it suffices to describe the asymptotic structure of minimal free resolutions over complete intersections.
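For readers meeting the term for the first time: a matrix factorization of an element f of a regular ring S is a pair of square matrices (A, B) over S with AB = BA = f \cdot I; the simplest example is f = xy in k[x,y] with the 1x1 matrices A = (x) and B = (y). Eisenbud's theorem says that, over the hypersurface ring S/(f), a minimal free resolution eventually becomes the 2-periodic complex alternating between A and B for some such pair, which is the sense in which matrix factorizations describe the asymptotic structure mentioned above.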
Zijun Zhou
Relative orbifold Donaldson-Thomas theory and local gerby curves
In this talk I will introduce the generalization of relative Donaldson-Thomas theory to 3-dimensional smooth Deligne-Mumford stacks. We adopt Jun Li's construction of expanded pairs and degenerations and prove an orbifold DT degeneration formula. I'll also talk about the application in the case of local gerby curves, and its relationship to the work of Okounkov-Pandharipande and Maulik-Oblomkov.
Anthony Licata
The braid group, the free group, and 2-linearity
The ADE braid group acts faithfully on the derived category of coherent sheaves on the resolution of the associated Kleinian singularity. In other words, the ADE braid groups are "2-linear" groups. In a similar spirit, the free group is a 2-linear group. In this talk we'll describe a few proofs of these results and explain how spherical twists and triangulated categories are related to some open problems in group theory.
K-theoretic geometric Satake
The geometric Satake equivalence relates the category of perverse sheaves on the affine Grassmannian and the representation category of a semisimple group G. We will discuss a quantum K-theoretic version of this equivalence. In this setup the representation category of G is replaced with (a quantum version of) coherent sheaves on G/G. This is joint work with Joel Kamnitzer.
Jae-Suk Park
Algebraic Geometry Seminar / Probability Seminar
Homotopy Theory of Probability Spaces
The notion of a homotopy probability space is an enrichment of the notion of an algebraic probability space with ideas from algebraic homotopy theory. This enrichment uses a characterization of the laws of random variables in a probability space in terms of symmetries of the expectation. The laws of random variables are reinterpreted as invariants of the homotopy types of infinity morphisms between certain homotopy algebras. The relevant category of homotopy algebras is determined by the appropriate notion of independence for the underlying probability theory. This theory will be both a natural generalization and an effective computational tool for the study of classical algebraic probability spaces, while keeping the same central limit.
This talk is focused on the commutative case, where the laws of random variables are also described in terms of certain affinely flat structures on the formal moduli space of a naturally defined family attached to the given algebraic probability space, where the relevant category is the homotopy category of L_\infty-algebras. Time permitting, I will explain an example of a homotopy probability space whose law corresponds to variations of Hodge structures on a toric hypersurface.
Sam Molcho
Moduli of Logarithmic Stable Toric Varieties
I am going to discuss Alexeev's and Brion's moduli space parametrizing maps
from broken toric varieties into a fixed toric variety V. Following ideas
of Olsson, I will explain how one can obtain a modular description of the
main irreducible component of Alexeev's and Brion's space, using an
analogous moduli space K(V) parametrizing logarithmic maps from broken
toric varieties into V. The resulting space K(V) is in fact a toric stack
-- it is a stacky enrichment of an appropriate Chow quotient of V. I will
conclude by explaining why K(V) and the Chow quotient stack coincide, and
describe explicitly the combinatorial data that determine the latter.
Dan Abramovich
Factorization of birational maps, with a shot of good energy
In joint work with Michael Temkin, we extend the old results with Karu, Matsuki and Wlodarczyk from varieties to qe schemes and use this to prove factorization for various other categories.
Georg Oberdieck
Gromov-Witten theory of K3 x P1 and Quasi-Jacobi forms
Let S be a K3 surface. Generating series of Gromov-Witten invariants of the product geometry SxP1 are conjectured to be quasi-Jacobi forms. We sketch a proof of this conjecture for classes of degree 1 or 2 over P1 using genus bounds on hyperelliptic curves in K3 surfaces by Ciliberto and Knutsen. This has applications to a GW/Hilb correspondence for K3 surfaces, and curve counting on SxE, where E is an elliptic curve.
Sam Gunningham
Categorical Harmonic Analysis on Reductive groups
In this talk I will survey some recent and ongoing work of myself and collaborators (David Ben-Zvi, David Nadler, Hendrik Orem), and others, concerning certain topological field theories associated to a complex reductive group G. The basic example of such a theory assigns the cohomology of the character variety (i.e. moduli of representations of the fundamental group) to a topological surface. To a point, it assigns the categorical group algebra of D-modules on G. I will discuss various approaches to studying this theory, including work from my thesis on parabolic induction and restriction functors, work in progress with Ben-Zvi and Nadler on a monoidal quantization of the group scheme of regular centralizers using translation functors on Whittaker modules, and a categorical highest weight theorem with Ben-Zvi, Nadler and Orem. Our work is partly motivated by the "Arithmetic Harmonic Analysis" developed by Hausel, Rodriguez-Villegas, and Letellier, to study the cohomology of character and quiver varieties.
Francesco Sala
Western Ontario
Hall algebras and sheaves on surfaces
Hall algebras play a prominent role in the interactions between algebraic geometry and representation theory. Recently, "refined" versions of them, called K-theoretic Hall algebras, were introduced by Schiffmann and Vasserot. They have notable connections with the geometric Langlands correspondence, the theory of quantum groups and gauge theories.
In the first part of the talk, I will give an overview of the theory of Hall algebras. In the second part, I will describe some (new) examples of K-theoretic Hall algebras. These algebras are related to some stacks of a certain kind of sheaves on noncompact surfaces. (Work in progress with Olivier Schiffmann.)
Prime Decomposition for the index of a Brauer class
An Azumaya algebra of degree n is an algebra locally isomorphic to an nxn matrix algebra, a concept that generalizes that of central simple algebras over fields. The Brauer group consists of equivalence classes of Azumaya algebras and the index of a class in the Brauer group is defined to be the greatest common divisor of the degrees of all Azumaya algebras in that class.
Suppose p and q are relatively prime positive integers. Whereas an Azumaya algebra of degree pq need not, in general, decompose as a tensor product of algebras of degrees p and q, we show that a Brauer class of index pq does decompose as a sum of Brauer classes of indices p and q. The argument requires only the representation theory of GLn, and therefore establishes the result in contexts where one does not have recourse to the theory of fields, for instance in the theory of Azumaya algebras on topological spaces.
Ádám Gyenge
Hilbert scheme of points on simple singularities
Given a smooth surface, the generating series of Euler characteristics of its Hilbert schemes of points can be given in closed form by (a specialisation of) Goettsche's formula. I will discuss a generalisation of this formula to surfaces with rational double points. A certain representation of the affine Lie algebra corresponding to the surface singularity (via the McKay correspondence), and its crystal basis theory, play an important role in our approach. Joint work with András Némethi and Balázs Szendrői.
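For context, the specialisation of Goettsche's formula in question reads \sum_{n \geq 0} \chi(Hilb^n(S)) q^n = \prod_{m \geq 1} (1-q^m)^{-\chi(S)} for a smooth surface S; the generalisation discussed in the talk replaces the smooth surface by one with rational double points.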
Roberto Pirisi
The Picard group of the universal abelian variety and the Franchetta conjecture for abelian varieties (joint work with R.Fringuelli)
Let g>2 be a positive integer, and let M_g be the moduli space of smooth curves of genus g over \mathbb{C}. The classical Franchetta conjecture asserts that the Picard group of the generic curve C_{\mu} over M_g is freely generated by its cotangent bundle. It was proved by Arbarello and Cornalba in 1980. Then Mestrano ('87) and Kouvidakis ('91) deduced the Strong Franchetta conjecture, which asserts that the rational points of the relative Picard scheme Pic_{C_{\mu} / \mu} are precisely the multiples of the cotangent bundle.
We will show that a suitably modified version of the Franchetta conjecture holds for a different moduli problem over \mathbb{C}, that of principally polarised abelian varieties (p.p.a.v.) of genus g\geq 3 with n-level structure. The abelian Franchetta conjecture states that the generic p.p.a.v. of genus g with n-level structure X_{g,n} has Picard group isomorphic to \mathbb{Z} \oplus (\mathbb{Z}/n\mathbb{Z})^{2g}, where the free part is generated by the bundle inducing the polarization, and the torsion part comes from the level structure.
In the abelian case, the ``weak" statement immediately implies the corresponding ``strong" statement regarding the rational points of the relative Picard scheme. Using duality, we will use this to compute the Picard group of the universal abelian variety \mathscr{X}_{g,n} over the moduli stack \mathscr{C}_{g,n} of genus g p.p.a.v. with n-level structure.
Mattia Talpo
Taking roots vs taking logarithms
I will report on joint work with D. Carchedi, S. Scherotzke and N. Sibilla, about a comparison between two objects obtained from a fs log scheme over the complex numbers: the "infinite root stack" and the "Kato-Nakayama space". I will also hint at more recent work that explains how parabolic sheaves (with real or rational weights) interact with the picture.
I will be as little technical as possible and focus on examples rather than on the general theory.
Eleonore Faber
Noncommutative resolutions of singularities and the McKay correspondence
Motivated by algebraic geometry, one studies non-commutative analogs of resolutions of singularities. In short, a non-commutative resolution (=NCR) of a commutative ring R is an endomorphism ring of a certain R-module of finite global dimension. However, it is not clear how to construct non-commutative resolutions in general and which structure they have. The most prominent example of NCRs comes from the classical McKay correspondence that relates the geometry of so-called Kleinian surface singularities with the representation theory of finite subgroups of SL(2,\mathbb{C}).
In this talk we will first review this fascinating result, exhibiting the connection to the ubiquitous Coxeter-Dynkin diagrams. Moreover, we will comment on an algebraic version of the correspondence, due to Maurice Auslander.
This leads to joint work in progress with Ragnar Buchweitz and Colin Ingalls about a version of the McKay correspondence when G in GL(n,\mathbb{C}) is a finite group generated by reflections: The group G acts linearly on the polynomial ring S in n variables over \mathbb{C}. When G is generated by reflections, the discriminant D of the group action of G on S is a hypersurface with a singular locus of codimension 1. We give a natural construction of a NCR of the coordinate ring of D as a quotient of the skew group ring A=S*G. We will explain this construction, which gives a new view on Knörrer's periodicity theorem for matrix factorizations and allows us to extend Auslander's theorem to reflection groups.
T-structures on coherent sheaves and categorical actions
I will review the notion of a t-structure and discuss some recent uses of t-structures on categories of coherent sheaves in (geometric) representation theory. After reviewing some traditional methods to obtain t-structures I will present a new construction that uses categorical Lie actions. As an application one recovers the category of "exotic sheaves", used in a recent proof of Lusztig's conjectures on canonical bases for the Grothendieck group of Springer fibers by Bezrukavnikov and Mirković. The new construction is purely geometric, instead of using deep results from modular representation theory. This is joint work with Sabin Cautis.
Asher Auel
The torsion order of an algebraic variety
The minimal multiple of the diagonal to admit a decomposition in the sense of Bloch and Srinivas is called the torsion order of a smooth projective variety. It is bounded above by the greatest common divisor of the degrees of all unirational parameterizations, and is a stable birational invariant. Recently, a degeneration method initiated by Voisin, and developed by Colliot-Thélène and Pirutka, has led to a breakthrough in establishing lower bounds for the torsion order, hence obstructions to stable rationality. The power of this method lies in its mix of inputs from algebraic cycles, Hodge theory, algebraic K-theory, birational geometry, and singularity theory. I will survey the state of the art of this theory, which includes recent work of Chatzistamatiou and Levine, as well as provide some new examples.
The derived Maurer-Cartan locus
We give a new definition of the derived Maurer-Cartan locus MC^*(L), as a functor from differential graded Lie algebras to cosimplicial schemes, whose definition is sufficiently straightforward that it generalizes well to other settings such as analytic geometry. If L is a differential graded Lie algebra, let L_+ be the truncation of L in positive degrees i>0. We prove that the differential graded algebra of functions on the cosimplicial scheme MC^*(L) is quasi-isomorphic to the Chevalley-Eilenberg complex of L_+, which is the usual definition of the derived Maurer-Cartan locus in characteristic zero.
Matthew Satriano
An Introduction to Toric Stacks, and Conjectures in Cycle Theory
We will not assume any prior knowledge of stacks for this talk. Toric stacks, like toric varieties, form a concrete class of examples which are particularly amenable to computation. We give an introduction to the subject and explain how we have used toric stacks to obtain an unexpected result in cycle theory. We end the talk by discussing some conjectures recently introduced by myself and Dan Edidin.
Elliptic Calabi-Yau threefolds and Jacobi forms via derived categories
By physical considerations Huang, Katz and Klemm conjecture that the generating series of Donaldson-Thomas invariants of an elliptic Calabi-Yau threefold is a Jacobi form. In this talk I will explain a mathematical approach to proving part of their conjecture. The method uses an autoequivalence of the derived category, and wall-crossing techniques developed by Toda. This leads to strong structure results for curve counting invariants. As a leading example we will discuss the elliptic fibration over P2 in degree 1.
This is joint work with Junliang Shen.
Pablo Solis
Compactifications and Gauged Gromov-Witten Theory
I will give an introduction to gauged Gromov-Witten theory. The theory naturally leads to studying compactifications of the moduli space of G bundles on nodal curves, which I'll discuss briefly. Then I'll focus on a version of gauged Gromov-Witten theory developed by Woodward and Gonzalez and I'll present a theorem which is joint work with Woodward and Gonzalez on the properness of the moduli of scaled gauged maps satisfying a stability condition introduced by Mundet and Schmitt.
Matthew Woolf
Mordell-Weil Groups of Hitchin Systems
In this talk, I will discuss work giving in many cases a complete description of the group of rational sections of the relative Jacobian of a linear system of curves on a surface. By specializing to the case of spectral curves, we are able to determine very explicitly the group of sections of the Hitchin fibration. We will also discuss work in progress to extend this work to principal G-Higgs bundles for more general groups G.
Emily Clader
Double Ramification Cycles and Tautological Relations
Tautological relations are certain equations in the Chow ring of the moduli space of curves. I will discuss a family of such relations, first conjectured by A. Pixton, that arises by studying moduli spaces of ramified covers of the projective line. These relations can be used to recover a number of well-known facts about the moduli space of curves, as well as to generate very special equations known as topological recursion relations. This is joint work with various subsets of S. Grushevsky, F. Janda, X. Wang, and D. Zakharov.
Dustin Ross
Genus-One Landau-Ginzburg/Calabi-Yau Correspondence
First suggested by Witten in the early 1990's, the Landau-Ginzburg/Calabi-Yau correspondence studies a relationship between spaces of maps from curves to the quintic 3-fold (the Calabi-Yau side) and spaces of curves along with 5th roots of their canonical bundle (the Landau-Ginzburg side). The correspondence was put on a firm mathematical footing in 2008 when Chiodo and Ruan proved a precise statement for the case of genus-zero curves, along with an explicit conjecture for the higher-genus correspondence. In this talk, I will begin by describing the motivation and the mathematical formulation of the LG/CY correspondence, and I will report on recent work with Shuai Guo that verifies the higher-genus correspondence in the case of genus-one curves.
Goulwen Fichou
Rennes / PIMS
On Grothendieck rings in real geometry
The study of Grothendieck rings of varieties in the context of real algebraic geometry began with the advent of motivic integration. Several such rings are of interest, depending notably on the class of functions we are interested in. We will discuss recent progress in the cases of real algebraic varieties and of arc-symmetric sets.
Amos Turchet
Fiber Powers and Uniformity
In 1997 Caporaso, Harris and Mazur, motivated by uniformity results in Diophantine Geometry, proposed a conjecture about fibered powers of families of varieties of general type. In particular, they conjectured that, if X -> B is a family whose general fiber is a variety of general type, then for large n, the n-th fiber power of X over B dominates a variety of general type. The conjecture has been proved by Abramovich and used to deduce interesting results on the distribution of rational points on projective varieties. I will discuss recent work, joint with Kenny Ascher (Brown), generalizing this to pairs (X,D) of a projective scheme and a divisor, and the new challenges that arise when one tries to obtain analogous results for the distribution of integral points in quasi-projective varieties.
François Greer
Noether-Lefschetz Theory and Elliptic CY3's
The Hodge theory of surfaces provides a link between enumerative geometry and modular forms, via the cohomological theta correspondence. I will present an approach to studying the Gromov-Witten invariants of Weierstrass fibrations over P^2, proving part of a conjectural formula coming from topological string theory.
Donghai Pan
Galois cyclic covers of the projective line and pencils of Fermat hypersurfaces
Classically, there are two objects that are particularly interesting to algebraic geometers: hyperelliptic curves and quadrics. The connection between these two seemingly unrelated objects was first revealed by M. Reid, whose result roughly says that there is a correspondence between hyperelliptic curves and pencils of quadrics. I'll give a brief review of Reid's work and then describe a higher degree generalization of the correspondence.
Michel Brion
University of Grenoble
Algebraic group actions on normal varieties
Let G be a connected algebraic k-group acting on a normal k-variety X, where k is a field. We will show that X is covered by open G-stable quasi-projective subvarieties; moreover, any such subvariety admits an equivariant embedding into the projectivization of a G-linearized vector bundle on an abelian variety A, where A is a quotient of G. This generalizes a classical result of Sumihiro for actions of smooth connected algebraic groups.
Immanuel Halupczok
University of Düsseldorf
Wed 15 Mar 2017, 3:00pm
Singularities and counting points
Wed 15 Mar 2017, 3:00pm-4:00pm
Given a variety X defined over Z, there are two problems which a priori seem to not have a lot to do with each other:
1. Describe the singularities of the complex variety X(C)
2. Fix a prime p and describe how the cardinality of X(Z/p^rZ) depends on r.
A surprising result from the 80s concerning the second problem is that the Poincaré series of X - a formal power series having the above cardinalities as coefficients - is a rational function.
In this talk, I will explain this in more detail and I will present a new notion of stratifications which contributes to both problems: On the one hand, such a stratification specializes to a stratification of X(C) (which has stronger regularity properties than classical Whitney stratifications); on the other hand, using those stratifications, one obtains a (new) geometric proof of the rationality of Poincare series.
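A minimal example of the rationality statement: for X the affine line, |X(Z/p^rZ)| = p^r, so the Poincaré series is \sum_{r \geq 0} p^r t^r = 1/(1-pt), visibly a rational function; the theorem from the 80s asserts that the same rationality holds for an arbitrary, possibly singular, X, where no such direct computation is available.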
Sarah Mayes-Tang
Several patterns emerge in collections of Betti tables associated to the powers of a fixed ideal. For example, Wheildon and others demonstrated that the shapes of the nonzero entries of these tables eventually stabilize when the fixed ideal has generators of the same degree. In this talk, I will discuss patterns in the graded Betti numbers of these and other graded systems of ideals. In particular, I will describe ways in which the Betti tables may stabilize, and how different types of stabilization are reflected in the corresponding Boij-Soederberg decompositions.
Riemann-Hilbert problems in Donaldson-Thomas theory
Recently Bridgeland has introduced the notion of a BPS structure, which is meant to encode the output of unrefined Donaldson-Thomas theory. He studied an associated Riemann-Hilbert problem and found a relation with Gromov-Witten invariants in the case of the conifold. In this talk I will try to give an overview of this work, ending with some potential new directions to explore.
Fei Hu
Dynamics on automorphism groups of compact Kähler manifolds
Given a compact Kähler manifold X and a biholomorphic self-map g of X, the topological entropy of g plays an important role in the study of the dynamical system (X, g). In this talk, I first talk about a generalization to hyperkähler manifolds of a surface result, namely that a parabolic automorphism of a compact Kähler surface preserves an elliptic fibration. We give a criterion for the existence of equivariant fibrations on 'certain' hyperkähler manifolds from a dynamical viewpoint. Next, I will generalize a finiteness result for the null-entropy subset of a commutative automorphism group due to Dinh–Sibony (2004), to arbitrary virtually solvable groups G of maximal dynamical rank. This is based on joint work with T.-C. Dinh, J. Keum, and D.-Q. Zhang.
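For context, the basic link between entropy and cohomology in this circle of ideas is the Gromov-Yomdin theorem: for an automorphism g of a compact Kähler manifold X, the topological entropy equals the logarithm of the spectral radius of the pullback g^* acting on H^*(X,C), so g has null entropy exactly when that spectral radius equals 1.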
Eugene Gorsky
Knot invariants and Hilbert schemes
I will discuss some recent results and conjectures relating knot invariants (such as HOMFLY-PT polynomial and Khovanov-Rozansky homology) to algebraic geometry of Hilbert schemes of points on the plane. All notions will be introduced in the talk, no preliminary knowledge is assumed. This is a joint work with Andrei Negut and Jacob Rasmussen.
A power structure over the Grothendieck ring of geometric dg categories
The notion of a power structure is closely related to that of a lambda ring. It is a powerful way to encode operations on certain generating functions. Gusein-Zade, Luengo, and Melle-Hernandez have defined a power structure over the Grothendieck ring of varieties. I will discuss an analog of this on a version of the Grothendieck ring of pretriangulated categories, whose elements represent enhancements of derived categories of coherent sheaves on varieties.
Samuel Bach
Derived algebraic geometry and L-theory
L-theory is often dubbed as "the K-theory" of quadratic forms. It has been used in a crucial way in surgery theory, to determine if two manifolds are cobordant. I will explain how it is easily defined in the derived setting by considering "derived" quadratic forms, and how I have used derived algebraic geometry to prove a rigidity result for L-theory. This will give an application of derived methods to a non-derived problem.
Klaus Altmann
Infinitesimal qG-deformations of cyclic quotient singularities
The subject of the talk is two-dimensional cyclic quotients, i.e. two-dimensional toric singularities. We introduce the classical work of Kollár/Shephard-Barron relating the components of their deformations and the so-called P-resolutions, we give several combinatorial descriptions of both gadgets, and we will focus on two special components among them - the Artin component allowing a simultaneous resolution and the qG-deformations preserving the Q-Gorenstein property. That is, it becomes important that several (or all) reflexive powers of the dualizing sheaf fit into the deformation as well. We will study this property in dependence on the exponent r. While the answers are already known for deformations over reduced base spaces (char = 0), we will now focus on the infinitesimal theory. (joint work with János Kollár)
SNS Pisa
Chow rings of some stacks of smooth curves
There is by now an extensive theory of rational Chow rings of stacks of smooth curves. The integral version of these Chow rings is not as well understood. I will survey what is known, including some recent developments.
Donaldson-Thomas invariants of the banana manifold and elliptic genera.
The Banana manifold (or bananafold for short) is a compact Calabi-Yau threefold X which fibers over P^1 with Abelian surface fibers. It has 12 singular fibers which are non-normal toric surfaces whose torus invariant curves are a banana configuration: three P^1's joined at two points, each of which locally looks like the coordinate axes in C^3. We show that the Donaldson-Thomas partition function of X (for curve classes in the fibers) has an explicit product formula which, after a change of variables, is the same as the generating function for the equivariant elliptic genera of Hilb(C^2), the Hilbert scheme of points in the plane.
The Theory of Resolvent Degree - After Hamilton, Sylvester, Hilbert, Segre and Brauer
Resolvent degree is an invariant of a branched cover which quantifies how "hard" it is to specify a point in the cover given a point under it in the base. Historically, this was applied to the branched cover P^n/S_{n-1} -> P^n/S_n, from the moduli of degree n polynomials with a specified root to the moduli of degree n polynomials. Classical enumerative problems and congruence subgroups provide two additional sources of branched covers to which this invariant applies. In ongoing joint work with Benson Farb, we develop the theory of resolvent degree as an extension of Buhler and Reichstein's theory of essential dimension. We apply this theory to systematize an array of classical results and to relate the complexity of seemingly different problems such as finding roots of polynomials, lines on cubic surfaces, and level structures on intermediate Jacobians.
Hirotachi Abo
Equations for surfaces in projective four-space
This talk is concerned with the question of the minimal number of equations necessary to define a given projective variety scheme-theoretically. Every hypersurface is cut out by a single polynomial scheme-theoretically (also set-theoretically and ideal-theoretically). Therefore the question is more interesting if a variety has a higher codimension. In this talk, we focus on the case when the codimension is two. If a variety in projective n-space has codimension two, then the minimal number of polynomials necessary to cut out the variety scheme-theoretically is between 2 and n+1. However the varieties cut out by fewer than n+1, but more than 2, polynomials seem very rare. The main goal of this talk is to discuss conditions for a non-singular surface in projective four-space to be cut out by three polynomials.
Shamil Asgarli
Tue 21 Nov 2017, 4:00pm SPECIAL
The Picard group of the moduli of smooth complete intersections of two quadrics
We study the moduli space of smooth complete intersections of two quadrics by relating it to the geometry of the singular members of the corresponding pencil. We give a new description for this parameter space by using the fact that two quadrics can be simultaneously diagonalized. Using this description we can compute the Picard group, which always happens to be cyclic. For example, we show that the Picard group of the moduli stack of smooth degree 4 Del Pezzo surfaces is Z/4Z.
This is a joint work with Giovanni Inchiostro.
Frank Sottile
Thu 23 Nov 2017, 4:00pm SPECIAL
Irrational Toric Varieties
Thu 23 Nov 2017, 4:00pm-5:00pm
Classical toric varieties come in two flavours: normal toric varieties are given by rational fans in R^n, while a (not necessarily normal) affine toric variety is given by a finite subset A of Z^n. When A is homogeneous, it is projective. Applications of mathematics have long studied the positive real part of a toric variety as the main object, where the points of A may be arbitrary points in R^n. For example, in 1963 Birch showed that such an irrational toric variety is homeomorphic to the convex hull of the set A.
Recent work showing that all Hausdorff limits of translates of irrational toric varieties are toric degenerations suggested the need for a theory of irrational toric varieties associated to arbitrary fans in R^n. These are R^n_>-equivariant cell complexes dual to the fan. Among the pleasing parallels with the classical theory is that the space of Hausdorff limits of the irrational projective toric variety of a finite set A in R^n is homeomorphic to the secondary polytope of A.
Daniel Litt
Arithmetic representations of fundamental groups
Let X be an algebraic variety over a field k. Which representations of pi_1(X) arise from geometry, e.g. as monodromy representations on the cohomology of a family of varieties over X? We study this question by analyzing the action of the Galois group of k on the fundamental group of X, and prove several fundamental structural results about this action.
As a sample application of our techniques, we show that if X is a normal variety over a field of characteristic zero, and p is a prime, then there exists an integer N=N(X,p) such that any non-trivial p-adic representation of the fundamental group of X, which arises from geometry, is non-trivial mod p^N.
Joontae Kim
Mon 8 Jan 2018, 4:00pm SPECIAL
PIMS 4127
Wrapped Floer homology of real Lagrangians and volume growth of symplectomorphisms
Floer homology has been a central tool to study global aspects of symplectic topology, which is based on pseudoholomorphic curve techniques proposed by Gromov. In this talk, we introduce a so-called wrapped Floer homology. Roughly speaking, this is a certain homology generated by intersection points of two Lagrangians, and its differential is given by counting solutions to a perturbed Cauchy-Riemann equation. We investigate an entropy-type invariant, called the slow volume growth, of certain symplectomorphisms and give a uniform lower bound on the growth using wrapped Floer homology. We apply our results to examples from real symplectic manifolds, including A_k-singularities and complements of a complex hypersurface. This is joint work with Myeonggi Kwon and Junyoung Lee.
Nicolas Addington
Mon 15 Jan 2018, 5:00pm SPECIAL
Math 126 (note special time)
CANCELED - Exoflops
Consider a contraction pi: X -> Y from a smooth Calabi-Yau 3-fold to a singular one. (This is half of an "extremal transition;" the other half would be a smoothing of Y.) In many examples there is an intermediate object called an "exoflop" -- a category of matrix factorizations, derived-equivalent to X, where the critical locus of the superpotential looks like Y with a P^1 sticking out of it, and objects of D(X) that will be killed by pi_* correspond to objects supported at the far end of the P^1. I will discuss one or two interesting examples. This is joint work with Paul Aspinwall.
Steven Rayan
Asymptotic geometry of hyperpolygons
Nakajima quiver varieties lie at the interface of geometry and representation theory. I will discuss a particular example, hyperpolygon space, which arises from star-shaped quivers. The simplest of these varieties is a noncompact complex surface admitting the structure of an "instanton", and therefore fits nicely into the Kronheimer-Nakajima classification of ALE hyperkaehler 4-manifolds, which is a geometric realization of the McKay correspondence for finite subgroups of SU(2). For more general hyperpolygon spaces, we speculate on how this classification might be extended by studying the asymptotic geometry of the variety. In moduli-theoretic terms, this involves driving the stability parameter for the quotient to an irregular value. This is joint work in progress with Hartmut Weiss, building on previous work with Jonathan Fisher.
Federico Scavia
Essential dimension of representations of algebras
The essential dimension of an algebraic object is the minimal number of independent parameters one needs to define it. I will explain how the representation type of a finitely-generated algebra (finite, tame, wild) is determined by the essential dimension of the functors of its n-dimensional representations, and I will introduce new numerical invariants for algebras. I will then illustrate the theorem and explicitly determine the invariants in the case of quiver algebras.
Sun 4 Feb 2018, 4:00pm
Motivic classes of algebraic groups
Sun 4 Feb 2018, 4:00pm-5:00pm
A birational Gabriel's theorem (joint w/ J. Calabrese).
A famous theorem by Gabriel asserts that two Noetherian schemes X, Y are isomorphic if and only if the categories Coh(X), Coh(Y) are equivalent. This theorem has been extended in many directions, including algebraic spaces and stacks (if we consider the monoid structure given by tensor product). One more idea to extend the theorem is the following: let X be a scheme of finite type over a field k, and consider the subcategory of Coh(X) given by sheaves supported in dimension at most d-1. We can form the quotient of Coh(X) by this subcategory, which we will call C_d(X). This category should contain enough information to describe the geometry of X "up to subsets of dimension d-1". In a joint work in progress with John Calabrese, we show that this is indeed true, i.e. to any isomorphism f: C_d(Y) ---> C_d(X) we can associate an isomorphism f': U ---> V, where U and V are open subsets of X and Y respectively, whose complements have dimension at most d-1. Additionally, this isomorphism is unique up to subsets of dimension at most d-1. As a corollary of this result, we show that the automorphisms of C_d(X) are in bijection with the set {"automorphisms of X up to subsets of dimension d-1"} x {"line bundles on X up to subsets of dimension d-1"}.
Mon 26 Feb 2018, 4:00pm SPECIAL
Fujita's Freeness Conjecture for Complexity-One T-Varieties
Fujita famously conjectured that for a d-dimensional smooth projective variety X with ample divisor H, mH+K_X is basepoint free whenever m\geq d+1. I will discuss recent joint work with Klaus Altmann in which we show this conjecture is true whenever X admits an effective action by a torus of dimension d-1.
Quotients of algebraic varieties
In this talk, we will address the following question: given an algebraic group G acting on a variety X, when does the quotient X/G exist? We will provide an answer to this question in the case that G is reductive by giving necessary and sufficient conditions for the quotient to exist. We will discuss various applications to equivariant geometry, moduli problems and Bridgeland stability.
Nikita Karpenko
On generic flag varieties of Spin(11) and Spin(12)
Let X be the variety of Borel subgroups of a split semisimple algebraic group G over a field, twisted by a generic G-torsor. Conjecturally, the canonical epimorphism of the Chow ring CH(X) onto the associated graded ring GK(X) of the topological filtration on the Grothendieck ring K(X) is an isomorphism. We prove new cases G=Spin(11) and G=Spin(12) of this conjecture. Equivalently, we compute the Chow ring CH(Y) of the highest orthogonal grassmannian Y for the generic 11- and 12-dimensional quadratic forms of trivial discriminant and Clifford invariant. In particular, we describe the torsion subgroup of the Chow group CH(Y) and determine its order, which is equal to 16 777 216. On the other hand, we show that the Chow group of 0-cycles on Y is torsion-free.
Mon 19 Mar 2018, 4:00pm SPECIAL
The quantum Hikita conjecture
The Hikita conjecture relates the cohomology ring of a symplectic resolution to the coordinate ring of another such resolution. I will explain this conjecture, and present a new version of the conjecture involving the quantum cohomology ring. There will be an emphasis on explicit examples.
Jeff Achter
Wed 21 Mar 2018, 4:00pm SPECIAL
Algebraic Geometry Seminar / Number Theory Seminar
Algebraic intermediate Jacobians are arithmetic
Consider a smooth projective variety over a number field. The image of the associated (complex) Abel--Jacobi map inside the (transcendental) intermediate Jacobian is an abelian variety. We show that this abelian variety admits a distinguished model over the number field. Among other applications, this tool allows us to answer a recent question of Mazur; recover an old result of Deligne; and give new constructions of period maps over arithmetic bases.
Carl Mautner
Springer theory and hypertoric varieties
The nilpotent cone has very special geometry which encodes interesting representation theoretic information. It is expected that many of its special properties have analogues for general "symplectic singularities." This talk will discuss one such analogy for a class of symplectic singularities called hypertoric varieties. The main result, joint with T. Braden, can be described as a duality between nearby and vanishing cycle sheaves on Gale dual hypertoric varieties.
Frances Kirwan
Hyperkahler implosion
The hyperkahler quotient construction (introduced by Hitchin et al in the 1980s) allows us to construct new hyperkahler spaces from suitable group actions on hyperkahler manifolds. This construction is an analogue of symplectic reduction (introduced by Marsden and Weinstein in the 1970s), and both are closely related to the quotient construction for complex reductive group actions in algebraic geometry provided by Mumford's geometric invariant theory (GIT). Hyperkahler implosion is in turn an analogue of symplectic implosion (introduced in a 2002 paper of Guillemin, Jeffrey and Sjamaar) which is related to a generalised version of GIT providing quotients for non-reductive group actions in algebraic geometry.
University of Hong Kong
Mon 20 Aug 2018, 4:00pm SPECIAL
Noncommutative Mather-Yau theorem and its applications
Mon 20 Aug 2018, 4:00pm-5:00pm
We prove that the right equivalence class of a superpotential in a complete free algebra is determined by its Jacobi algebra and the canonical class in its 0-th Hochschild homology represented by the superpotential, assuming the Jacobi algebra is finite dimensional. This is a noncommutative version of the famous Mather-Yau theorem for isolated hypersurface singularities. As a consequence, we prove a rigidity theorem for Ginzburg dg-algebras. I will discuss some applications of these results in three dimensional birational geometry. This is a joint work with Guisong Zhou (1803.06128).
Elana Kalashnikov
Four dimensional Fano quiver flag zero loci
The classification of Fano varieties is unknown beyond dimension 3; however, many Fano fourfolds are expected to be GIT theoretic subvarieties of either toric varieties or quiver flag varieties. Quiver flag varieties are a generalization of type A flag varieties and are GIT quotients of vector spaces. In this talk, I will discuss my recent work on quiver flag varieties, including a proof of the Abelian/non-Abelian correspondence for quiver flag varieties, and its application in the large scale computer search for Fano fourfolds that I have carried out in joint work with T. Coates and A. Kasprzyk. We find 139 new Fano fourfolds. I will also discuss the importance of these subvarieties as a testing ground for the conjectures of Coates, Corti, Galkin, Golyshev, Kasprzyk and Tveiten on mirror symmetry for Fano varieties.
Burt Totaro
Thu 13 Sep 2018, 4:00pm SPECIAL
Hodge theory of classifying stacks
Thu 13 Sep 2018, 4:00pm-5:00pm
The goal of this talk is to create a correspondence between the representation theory of algebraic groups and the topology of Lie groups. The idea is to study the Hodge theory of the classifying stack of a reductive group over a field of characteristic p, the case of characteristic 0 having been studied by Behrend, Bott, Simpson and Teleman. The approach yields new calculations in representation theory, motivated by topology.
Interpolating between the Batyrev--Manin and Malle Conjectures
The Batyrev--Manin conjecture gives a prediction for the asymptotic growth rate of rational points on varieties over number fields when we order the points by height. The Malle conjecture predicts the asymptotic growth rate for number fields of degree d when they are ordered by discriminant. The two conjectures have the same form and it is natural to ask if they are, in fact, one and the same. We develop a theory of point counts on stacks and give a conjecture for their growth rate which specializes to the two aforementioned conjectures. This is joint work with Jordan Ellenberg and David Zureick-Brown.
Alexei Oblomkov
University of Massachusets
Knot invariants, Hilbert schemes and arc spaces
In my talk I will explain a (partially conjectural) relation between:
1) Homology of Hilbert scheme of points on singular curves
2) Knot homology of the links of curve singularities
3) The space of functions on the moduli space of maps from the formal disc to the curve singularities.
I will center my talk around a discussion of the case of the cuspidal curve x^m=y^n and its singularity. In this case it is now known that 1), 2) and 3) are essentially equal. The talk is based on joint projects with Gorsky, Rozansky, Rasmussen, Shende and Yun.
The Conormal Variety of a Schubert Variety
Let N be the conormal variety of a Schubert variety X. In this talk, we discuss the geometry of N in two cases, when X is cominuscule, and when X is a divisor. In particular, we present a resolution of singularities and a system of defining equations for N, and also describe certain cases when N is normal, Cohen-Macaulay, and Frobenius split. Time permitting, we will also illustrate the close relationship between N and orbital varieties, and discuss the consequences of the above constructions for orbital varieties.
Exoflops
The derived category of a hypersurface is equivalent to the category of matrix factorizations of a certain function on the total space of a line bundle over the ambient space. The hypersurface is smooth if and only if the critical locus of the function is compact. I will present a construction through which a resolution of singularities of the hypersurface corresponds to a compactification of the critical locus of the function, which can be very interesting in examples. This is joint work with Paul Aspinwall and Ed Segal.
Toni Annala
Bivariant Theories and Algebraic Cobordism of Singular Varieties
I will outline the construction of a natural bivariant theory extending algebraic bordism, which will yield an extension of algebraic cobordism to singular varieties. I will also discuss the connections of this theory to algebraic K-theory and to intersection theory.
Jeremy Usatine
Hyperplane arrangements and compactifying the Milnor fiber
Milnor fibers are invariants that arise in the study of hypersurface singularities. A major open conjecture predicts that for hyperplane arrangements, the Betti numbers of the Milnor fiber depend only on the combinatorics of the arrangement. I will discuss how tropical geometry can be used to study related invariants, the virtual Hodge numbers of a hyperplane arrangement's Milnor fiber. This talk is based on joint work with Max Kutler.
Fenglong You
Relative and orbifold Gromov-Witten theory
Given a smooth projective variety X and a smooth divisor D \subset X, one can study the enumerative geometry of counting curves in X with tangency conditions along D. There are two theories associated to it: relative Gromov-Witten invariants of (X,D) and orbifold Gromov-Witten invariants of the r-th root stack X_{D,r}. For sufficiently large r, Abramovich-Cadman-Wise proved that genus zero relative invariants are equal to the genus zero orbifold invariants of root stacks (with a counterexample in genus 1). We show that higher genus orbifold Gromov-Witten invariants of X_{D,r} are polynomials in r and the constant terms are exactly higher genus relative Gromov-Witten invariants of (X,D). If time permits, I will also talk about further results in genus zero which allow us to study structures of genus zero relative Gromov-Witten theory, e.g. Givental formalism for genus zero relative invariants. This is based on joint work with Hsian-Hua Tseng, Honglu Fan and Longting Wu.
Dori Bejleri
Motivic Hilbert zeta functions of curves
The Grothendieck ring of varieties is the target of a rich invariant associated to any algebraic variety which witnesses the interplay between geometric, topological and arithmetic properties of the variety. The motivic Hilbert zeta function is the generating series for classes in this ring associated to a certain compactification of the unordered configuration space, the Hilbert scheme of points, of a variety. In this talk I will discuss the behavior of the motivic Hilbert zeta function of a reduced curve with arbitrary singularities. For planar singularities, there is a large body of work detailing beautiful connections with enumerative geometry, representation theory and topology. I will discuss some conjectural extensions of this picture to non-planar curves.
Dan Edidin
Mon 26 Nov 2018, 3:00pm SPECIAL
MATH 126 Please note unusual start time (3:00pm)
Saturated blowups and canonical reduction of stabilizers
We introduce a construction called the {\em saturated blowup} of an Artin stack with good moduli space. The saturated blowup is a birational map of stacks which induces a proper birational map on good moduli spaces. Given an Artin stack {\mathcal X} with good moduli space X, there is a canonical sequence of saturated blowups which produces a stack whose rigidification is a DM stack. When the stack is smooth, all of the stacks in the sequence of saturated blowups are also smooth. This construction generalizes earlier work of Kirwan and Reichstein in geometric invariant theory and the talk is based on joint work with David Rydh.
Seidon Alsaody
Exceptional Groups and Exceptional Algebras
Exceptional groups (over arbitrary rings) are related to octonion algebras, triality and exceptional Jordan algebras. I will talk about recent results of an approach to these objects using certain torsors (principal homogeneous spaces) under smaller exceptional groups, and explain how an explicit understanding of these torsors provides insight into the objects and their interrelations.
Farbod Shokrieh
Heights and tropical geometry
Given a principally polarized abelian variety A over a number field (or a function field), one can naturally extract two real numbers that capture the ``complexity'' of A: one is the Faltings height and the other is the N\'eron-Tate height (of a symmetric effective divisor defining the polarization). I will discuss a precise relationship between these two numbers, relating them to some subtle invariants arising from tropical geometry (more precisely, from Berkovich analytic spaces).
(Joint work with Robin de Jong.)
Kevin Casto
Representation theory and arithmetic statistics of generalized configuration spaces
Church-Ellenberg-Farb introduced the theory of FI-modules to explain the phenomenon of representation stability of the cohomology of configuration spaces. I will explain the basics of how this story goes, and then explain how to extend their analysis to two generalized types of configuration spaces. Furthermore, I will explain how the Grothendieck-Lefschetz formula connects these topological stability phenomena to stabilization of statistics for polynomials and rational maps over finite fields.
Alexander Neshitov
Motivic decompositions of homogeneous spaces and representations of Hecke type algebras
This is a joint work with B. Calmes, V. Petrov, N. Semenov and K. Zainoulline. In the talk I will discuss a connection between direct sum decompositions of the Chow motive with Z-coefficients of a homogeneous space of a group G, and representations of affine nil Hecke algebras defined in terms of the root system of G. This connection can be used in two directions: to prove indecomposability of certain motives as well as to get some structural results about Hecke algebras.
Rostislav Devyatov
Multiplicity-free products of Schubert divisors
Let G/B be a flag variety over C, where G is a simple algebraic group with a simply laced Dynkin diagram, and B is a Borel subgroup. The Bruhat decomposition of G defines subvarieties of G/B called Schubert subvarieties. The codimension 1 Schubert subvarieties are called Schubert divisors. The Chow ring of G/B is generated as an abelian group by the classes of all Schubert varieties, and is "almost" generated as a ring by the classes of Schubert divisors. More precisely, an integer multiple of each element of the Chow ring of G/B can be written as a polynomial in the Schubert divisors with integer coefficients. In particular, each product of Schubert divisors is a linear combination of Schubert varieties with integer coefficients.
In the first part of my talk I am going to speak about the coefficients of these linear combinations. In particular, I am going to explain how to check if a coefficient of such a linear combination is nonzero and if such a coefficient equals 1. In the second part of my talk, I will say something about an application of my result, namely, how it makes it possible to estimate the so-called canonical dimension of flag varieties and groups over non-algebraically closed fields.
The Grothendieck ring of algebraic stacks was introduced by Ekedahl in 2009. It may be viewed as a localization of the more common Grothendieck ring of varieties. If G is a finite group, then the class {BG} of its classifying stack BG is equal to 1 in many cases, but there are examples for which {BG}\neq 1. When G is connected, {BG} has been computed in many cases in a long series of papers, and it always turned out that {BG}*{G}=1. We exhibit the first example of a connected group G for which {BG}*{G}\neq 1. As a consequence, we produce an infinite family of non-constant finite étale group schemes A such that {BA}\neq 1.
Cohomological and numerical dynamical degrees on abelian varieties
In 2013, Esnault and Srinivas proved that as in the de Rham cohomology over the field of complex numbers, the algebraic entropy of an automorphism of a smooth projective surface over a finite field $\mathbb{F}_q$ is taken on the span of the Néron–Severi group inside of $\ell$-adic cohomology. Later, motivated by this and Weil's Riemann Hypothesis, Truong asked whether the spectral radius $\chi_{2k}(f)$ of the pullback $f^*: H^{2k}(X, \mathbb{Q}_\ell) \to H^{2k}(X, \mathbb{Q}_\ell)$ is the same as the spectral radius $\lambda_k(f)$ of the pullback $f^*: N^k(X)_\mathbb{R} \to N^k(X)_\mathbb{R}$, where $f: X \to X$ is a surjective self-morphism of a smooth projective variety $X$ of dimension $n$ defined over an algebraically closed field $\mathbb{k}$ and $N^k(X)$ denotes the finitely generated abelian group of algebraic $(n-k)$-cycles modulo the numerical equivalence. He has shown that $\displaystyle \max_{0\le i\le 2n} \chi_{i}(f) = \max_{0\le k\le n} \lambda_{k}(f)$. We give an affirmative answer to his question in the case of abelian varieties and $k=1$.
The motivic weight of the stack of bundles
I will talk about a new approach to computing the motivic weight of the stack of G-bundles on a curve. The idea is to associate a motivic weight to certain ind-schemes, such as the affine Grassmannian and the scheme of maps X -> G, where X is an affine curve, using Bittner's calculus of 6 operations. I hope that this will eventually lead to a proof of a conjectural formula for the motivic weight of the stack of bundles in terms of special values of Kapranov's zeta function.
On a conjecture of Voisin
C. Voisin proved that no two distinct points on a very
general surface of degree $\ge 7$ in ${\mathbb P}^3$ are rationally
equivalent. She conjectured that the same holds for a very general
sextic surface. We settled this conjecture by improving her method
which makes use of the global jet spaces. This is a joint work with
James D. Lewis and Mao Sheng.
Sebastian Casalaina-Martin
Distinguished models of intermediate Jacobians
In this talk I will discuss joint work with J. Achter and C. Vial showing that the image of the Abel--Jacobi map on algebraically trivial cycles descends to the field of definition for smooth projective varieties defined over subfields of the complex numbers. The main focus will be on applications to topics such as: descending cohomology geometrically, a conjecture of Orlov regarding the derived category and Hodge theory, and motivated admissible normal functions.
Abhishek Kumar Shukla
Minimal number of generators of an étale algebra
O. Forster proved that over a ring R of Krull dimension d, a finite module M of rank at most n can be generated by n+d elements. Generalizing this in great measure, U. First and Z. Reichstein showed that any finite R-algebra A can be generated by n+d elements if each A\otimes_R k(\mathfrak{p}), for \mathfrak{p}\in \mathrm{MaxSpec}(R), is generated by n elements. It is natural to ask if the upper bounds can be improved. For modules over rings, R. Swan produced examples to match the upper bound. Recently B. Williams obtained weaker lower bounds in the context of Azumaya algebras. In this paper we investigate this question for étale algebras. We show that the upper bound is indeed sharp. Our main result is a construction of universal varieties for degree-2 étale algebras equipped with a set of r generators and explicit examples realizing the upper bound of First & Reichstein. This is joint work with Ben Williams.
Mon 8 Jul 2019, 2:30pm SPECIAL
PIMS Lounge
Noncommutative differential calculus and Calabi-Yau geometry
Mon 8 Jul 2019, 2:30pm-3:30pm
Quivers with potential appear naturally in the study of the deformation theory of objects in 3D Calabi-Yau categories, for example the deformation of vector bundles on 3D Calabi-Yau manifolds. They provide a deep link between the geometry of Calabi-Yau manifolds and some aspects of representation theory, for example cluster algebras, quantum enveloping algebras, etc. In this talk, I will survey some recent progress in noncommutative differential calculus of quivers with potentials, and show how this leads to new results in birational geometry and Donaldson-Thomas theory.
Scuola Normale Superiore
The Chow ring of the stack of stable curves of genus 2
There is by now an extensive theory of rational Chow rings of moduli spaces of smooth curves. The integral version of these Chow rings is not as well understood. I will survey what is known. In the last part of the talk I will discuss the Chow ring of the stack of stable curves of genus 2, which has been recently calculated by Eric Larson. I will present a different approach to the calculation, which offers an interesting point of view on the stack of stable curves of genus 2. This part is joint work with Andrea Di Lorenzo.
Uriya First
Thu 15 Aug 2019, 3:30pm SPECIAL
Involutions of the third kind
Let K be a field and let t : K -> K be an automorphism of order 1 or 2. Let F denote the subfield of t-invariant elements in K. Then either K=F or K/F is a quadratic Galois extension. Given a central simple K-algebra A, a t-involution of A is an anti-automorphism s: A -> A satisfying s^2 = id_A and restricting to t on the center K. The involution s is said to be of the first kind if K=F and of the second kind if K/F is quadratic Galois. A classical theorem of Albert gives a necessary and sufficient condition for A to have a t-involution.
Suppose now that R is a commutative ring, t: R -> R is an automorphism of order 1 or 2, and S is the fixed subring of t. Over R, the role of central simple algebras is played by Azumaya R-algebras. In this context, Albert's theorem fails, but Saltman showed that the condition given by Albert determines when an Azumaya algebra A is Brauer equivalent to another Azumaya algebra admitting a t-involution, provided S=R (first kind) or R/S is quadratic etale (second kind). This was extended to Azumaya algebras over schemes by Knus, Parimala and Srinivas.
I will discuss recent work with Ben Williams in which we treat the case where R is neither S nor a quadratic etale extension of S. (Our results also apply in the even more general context of locally ringed spaces.) In this case, the t-involutions can be regarded as being "of the third kind". This setting features new phenomena and raises interesting open questions.
Relevant definitions will be recalled during the talk.
Naoki Koseki
University of Tokyo
Birational geometry of the moduli spaces of coherent sheaves on blown-up surfaces
To study the difference between motivic invariants of the moduli spaces of coherent sheaves on a smooth surface and those on the blown-up surface, Nakajima-Yoshioka constructed a sequence of flip-like diagrams connecting these moduli spaces. In this talk, I will explain the birational geometric properties of Nakajima-Yoshioka's wall-crossing diagram. It turns out that it realizes a minimal model program.
Ming Zhang
K-theoretic quasimap wall-crossing for GIT quotients
For a large class of GIT quotients X=W//G, Ciocan-Fontanine—Kim—Maulik and many others have developed the theory of epsilon-stable quasimaps. The conjectured wall-crossing formula of cohomological epsilon-stable quasimap invariants for all targets in all genera has been recently proved by Yang Zhou.
In this talk, we will introduce permutation-equivariant K-theoretic epsilon-stable quasimap invariants with level structure and prove their wall-crossing formulae for all targets in all genera. In particular, it will recover the genus-0 K-theoretic toric mirror theorem by Givental-Tonita and Givental, and the genus-0 mirror theorem for quantum K-theory with level structure by Ruan-Zhang. It is based on joint work in progress with Yang Zhou.
Charles Favre
École Polytechnique Paris
Degree growth of rational maps
The understanding of the growth of degrees of iterates of a rational self-map of a projective variety is a fundamental problem in holomorphic dynamics. I shall review some basic results of the theory and discuss some recent directions of research.
Categorical structure of Coulomb branches of 4D N=2 gauge theories
Coulomb branches have recently been given a good mathematical footing thanks to work of Braverman-Finkelberg-Nakajima. We will discuss their categorical structure. For concreteness we focus on the massless case which leads us to the category of coherent sheaves on the affine Grassmannian (the so called coherent Satake category).
This category is conjecturally governed by a cluster algebra structure. We will describe a solution to this conjecture in the case of general linear groups and discuss extensions of this result to more general Coulomb branches of 4D N=2 gauge theories. This is joint work with Harold Williams.
K3 surfaces with symplectic group actions, enumerative geometry, and modular forms
The Hilbert scheme parameterizing n points on a K3 surface X is a holomorphic symplectic manifold with rich properties. In the 90s it was discovered that the generating function for the Euler characteristics of the Hilbert schemes is related to both modular forms and the enumerative geometry of rational curves on X. We show how this beautiful story generalizes to K3 surfaces with a symplectic action of a group G. Namely, the Euler characteristics of the "G-fixed Hilbert schemes" parametrizing G-invariant collections of points on X are related to modular forms of level |G| and the enumerative geometry of rational curves on the stack quotient [X/G] . These ideas lead to some beautiful new product formulas for theta functions associated to root lattices. The picture also generalizes to refinements of the Euler characteristic such as chi_y genus and elliptic genus leading to connections with Jacobi forms and Siegel modular forms.
Alex Weekes
Mathematics, UBC
Coulomb branches of 3d N=4 theories
The Coulomb branches of 3d N=4 gauge theories were recently given a mathematical definition by Braverman, Finkelberg, and Nakajima. These very interesting algebraic varieties were already discussed in Sabin Cautis's talk a few weeks ago, but since they may be unfamiliar I will overview their definition and properties, and discuss some interesting examples. Finally, I will discuss my joint work with Nakajima where we give a generalization of the definition of Coulomb branches, which allows us to realize affine Grassmannian slices of all finite types.
Stephen Scully
On a generalization of Hoffmann's separation theorem for quadratic forms over fields
The problem of determining conditions under which a rational map can exist between a pair of twisted flag varieties plays an important general role in the study of linear algebraic groups and their torsors over arbitrary fields. A non-trivial special case arising in the algebraic theory of quadratic forms amounts to studying the extent to which an anisotropic quadratic form can become isotropic over the function field of a quadric. To this end, let p and q be a pair of anisotropic non-degenerate quadratic forms over a field, and let k be the dimension of the anisotropic part of q over the function field of the quadric p=0. We then make the general conjecture that the dimension of q must lie within k of an integer multiple of 2^{s+1}, where s is such that 2^s < \mathrm{dim}(p) \leq 2^{s+1}. This generalizes a well-known "separation theorem" of D. Hoffmann, bridging the gap between it and some other classical results on Witt kernels of function fields of quadrics. Note that the statement holds trivially if k \geq 2^s - 1. In this talk, I will discuss recent work that confirms the claim in the case where k \leq 2^{s-1} + 2^{s-2}, and more generally when \mathrm{dim}(p) > 2k - 2^{s-1}.
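For a concrete illustration of the numerology in the conjecture: if \mathrm{dim}(p) = 5, then s = 2 (since 2^2 < 5 \leq 2^3), and the prediction is that \mathrm{dim}(q) differs from some integer multiple of 2^{s+1} = 8 by at most k.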
Clifton Cunningham
The geometry of Arthur packets for p-adic groups
Using an example to illustrate the process, I will explain how an Arthur parameter \psi for a p-adic group G determines a category P_\psi of equivariant perverse sheaves on a moduli space X_\psi of Langlands parameters for G and then how the microlocal perspective on P_\psi reveals the local Arthur packet \Pi_\psi attached to \psi . This talk will not assume you already know how to compute Arthur packets for p-adic groups but rather will show how to compute these things directly using geometric tools -- that's really one of the main points of this perspective. Joint with Andrew Fiori, Ahmed Moussaoui, James Mracek and Bin Xu.
Biregular Cremona transformations of the plane
We study the birational self-maps of the projective plane that induce bijections on the k-rational points for a given field k. These form a subgroup BCr_2(k) inside the Cremona group. The elements of BCr_2(k) are called Biregular Cremona transformations. We show that BCr_2(k) is not finitely-generated under a mild hypothesis on the field k. When k is a finite field, we study the possible permutations induced on the k-rational points of the plane. This is joint work with Kuan-Wen Lai, Masahiro Nakahara and Susanna Zimmermann.
Projective bundle formula in derived cobordism theory
I will introduce the universal precobordism theory, which generalizes algebraic cobordism of Levine-Morel to arbitrary quasi-projective schemes over a Noetherian base ring A. In the main part of the talk I will outline the proof of projective bundle formula for this new cohomology theory. The usual proof techniques based on resolution of singularities and weak factorization break down in this generality, so we have to use an alternative approach based on carefully studying the structure of precobordism rings of varieties with line bundles, which were inspired by a paper of Lee-Pandharipande. The talk is based on joint work with Shoji Yokura.
Yu-Hsiang Liu
Donaldson-Thomas theory for quantum Fermat quintic threefolds
In this talk, I will define Donaldson-Thomas type invariants for non-commutative projective Calabi-Yau-3 schemes whose associated graded algebras are finite over their centers. As an example, I will discuss the local structure of Hilbert schemes of points on the quantum Fermat quintic threefold, and compute some of its invariants.
Shuai Wang
Relative Gromov-Witten theory and vertex operators
We study the relative Gromov-Witten theory on T*P^1 \times P^1 and show that certain equivariant limits give us the relative invariants on P^1 \times P^1. By formulating the quantum multiplications on Hilb(T*P^1) computed by Davesh Maulik and Alexei Oblomkov as vertex operators and computing the product expansion, we demonstrate how to get the insertion and tangency operators computed by Yaim Cooper and Rahul Pandharipande in the equivariant limits.
Dylan G.L. Allegretti
Stability conditions and cluster varieties from surfaces
In low-dimensional geometry and topology, there is a classical construction that takes a holomorphic quadratic differential on a surface and produces a PGL(2)-local system. This construction provides a local homeomorphism from the moduli space of quadratic differentials to the moduli space of local systems. In this talk, I will propose a categorical generalization of this construction. In this generalization, the space of quadratic differentials is replaced by a complex manifold parametrizing Bridgeland stability conditions on a certain 3-Calabi-Yau triangulated category, while the space of local systems is replaced by a cluster variety. I will describe a local homeomorphism from the space of stability conditions to the cluster variety in a large class of examples and explain how it preserves the structures of these two spaces.
Chi-Yu Cheng
Variation of Instability in Invariant Theory
Mumford's GIT quotient is one way to construct moduli spaces that parametrize classes of algebro-geometric objects. It turns out there is an interesting structure on the set of unstable points discarded in the GIT quotients. In this talk I would aim to describe:
1. the stratification of the unstable points and its variation caused by different choices of linearizations;
2. a wall and chamber decomposition analogous to Variation of Geometric Invariant Theory Quotient;
3. examples and results in the case of projective toric varieties.
Nguyen-Bac Dang
Spectral gap in the dynamical degrees of tame automorphism preserving an affine quadric threefold
In this talk, I will present the group of tame automorphisms preserving an affine quadric threefold. The main focus of my talk is the understanding of the degree sequences induced by the elements of this group. Precisely, I will explain how one can apply some ideas from geometric group theory in combination with valuative techniques to show that the values of the dynamical degrees of these tame automorphisms admit a spectral gap. Finally, I will apply these techniques to study random walks on this particular group.
Codimension two cycles on classifying stacks of algebraic tori
We give an example of an algebraic torus T such that the torsion subgroup of the Chow group CH^2(BT) is non-trivial. This answers a question of Blinstein and Merkurjev.
Quivers, canonical bases, and categorification
In a famous paper from 2003, Fock and Goncharov conjectured that the algebra of regular functions on a cluster variety has a canonical basis parametrized by the tropicalization of a dual cluster variety. In this talk, I will show how to construct this canonical basis in an interesting class of examples. Using ideas from the representation theory of quivers, I will construct graded vector spaces which categorify the elements of the canonical basis. These graded vector spaces are closely related to spaces of BPS states in supersymmetric quantum field theories.
Degeneration of complex manifolds to hybrid spaces and applications
I will discuss the notion of hybrid spaces introduced by Berkovich and further developed by Boucksom and Jonsson in order to understand various problems concerning degenerations of complex manifolds. Applications to complex dynamical systems will be presented.
Zinovy Reichstein
On the number of generators of a finite algebra over a ring
Let k be a field, A be a finite-dimensional k-algebra (not necessarily commutative, associative or unital), and R be a commutative ring containing k. An R-algebra B is called an R-form of A if there exists a faithfully flat ring extension S/R such that B and A become isomorphic after tensoring with S. In this talk, based on joint work with Uriya First, I will be interested in the following question: if A can be generated by n elements as a k-algebra, how many elements are required to generate B as an R-algebra? For example, if A is an n-dimensional k-algebra with trivial (zero) multiplication, then an R-form of A is the same thing as a projective R-module. Otto Forster (1964) showed that every projective R-module B of rank n can be generated by n + d elements, where d is the Krull dimension of R. Richard Swan subsequently showed that this number is optimal. I will discuss generalizations of Forster's result to other types of algebras, in particular to Azumaya algebras.
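As a simple illustration of Forster's bound: if R is a Dedekind domain, so that d = 1, then a rank-n projective R-module can be generated by n + 1 elements; for n = 1 this recovers the classical fact that every ideal of a Dedekind domain is generated by two elements.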
[CANCELLED] Kuan-Wen Lai
UMass Amherst
[CANCELLED] New rational cubic fourfolds arising from Cremona transformations
It is conjectured that two cubic fourfolds are birational if their associated K3 categories are equivalent. We prove this conjecture for very general cubic fourfolds containing a Veronese surface, where the birational maps are induced from a Cremona transformation. Using these birational maps, we find new rational cubic fourfolds. This is joint work with Yu-Wei Fan.
[Cancelled] Nathan Ilten
[Cancelled]
Fri 8 May 2020, 8:00am
Zoom online
An ultimate proof of Hoffmann-Totaro's conjecture
Fri 8 May 2020, 8:00am-9:00am
Event on Zoom: Seminar on Quadratic forms, linear algebraic groups and beyond
We prove the last open case of the conjecture on the possible values of the first isotropy index of an anisotropic quadratic form over a field. It was initially stated by Detlev Hoffmann for fields of characteristic not 2 and then extended to arbitrary characteristic by Burt Totaro. The initial statement was proven by the speaker in 2002. In characteristic 2, the case of a totally singular quadratic form was done by Stephen Scully in 2015 and the nonsingular case by Eric Primozic in early 2019.
Zoom, see https://mysite.science.uottawa.ca/kzaynull/QFLAG/index.html
Renzo Cavalieri
Fri 25 Sep 2020, 8:30am
https://ubc.zoom.us/j/67916711780?pwd=dlVVb3U1WXV1cGRscEJxUVpDa0JyUT09
Tropical Psi Classes
Fri 25 Sep 2020, 8:30am-10:00am
Pierrick Bousseau
Fri 2 Oct 2020, 8:30am
https://ubc.zoom.us/j/67916711780 (password:the number of lines on a generic quintic threefold)
The skein algebra of the 4-punctured sphere from curve counting
Fri 2 Oct 2020, 8:30am-10:00am
The Kauffman bracket skein algebra is a quantization of the algebra of regular functions on the SL_2 character variety of a topological surface. I will explain how to realize the skein algebra of the 4-punctured sphere as the output of a mirror symmetry construction based on higher genus Gromov-Witten invariants of a log Calabi-Yau cubic surface. This leads to a proof of a previously conjectured positivity property of the bracelets bases of the skein algebras of the 4-punctured sphere and of the 1-punctured torus.
Tony Yue Yu
Laboratoire de Mathématiques d'Orsay, Université Paris-Sud, Paris-Saclay
https://ubc.zoom.us/j/67916711780(password:the number of lines on a generic quintic threefold)
Secondary fan, theta functions and moduli of Calabi-Yau pairs
We conjecture that any connected component $Q$ of the moduli space of triples $(X,E=E_1+\dots+E_n,\Theta)$ where $X$ is a smooth projective variety, $E$ is a normal crossing anti-canonical divisor with a 0-stratum, every $E_i$ is smooth, and $\Theta$ is an ample divisor not containing any 0-stratum of $E$, is \emph{unirational}. More precisely: note that $Q$ has a natural embedding into the Kollár-Shepherd-Barron-Alexeev moduli space of stable pairs, we conjecture that its closure admits a finite cover by a complete toric variety. We construct the associated complete toric fan, generalizing the Gelfand-Kapranov-Zelevinski secondary fan for reflexive polytopes. Inspired by mirror symmetry, we speculate a synthetic construction of the universal family over this toric variety, as the Proj of a sheaf of graded algebras with a canonical basis, whose structure constants are given by counts of non-archimedean analytic disks. In the Fano case and under the assumption that the mirror contains a Zariski open torus, we construct the conjectural universal family, generalizing the families of Kapranov-Sturmfels-Zelevinski and Alexeev in the toric case. In the case of del Pezzo surfaces with an anti-canonical cycle of $(-1)$-curves, we prove the full conjecture. The reference is arXiv:2008.02299 joint with Hacking and Keel.
Yan Soibelman
Fri 16 Oct 2020, 8:30am
Exponential integrals, Holomorphic Floer theory and resurgence
Fri 16 Oct 2020, 8:30am-10:00am
Holomorphic Floer theory is a joint project with Maxim Kontsevich, which is devoted to various aspects of the Floer theory in the framework of complex symplectic manifolds.
In my talk I will consider an important special case of the general story. Exponential integrals in finite and infinite dimensions can be thought of as a generalization of the theory of periods (i.e., variations of Hodge structure). In particular, there are comparison isomorphisms between Betti and de Rham cohomology in the exponential setting. These isomorphisms are corollaries of categorical equivalences which are incarnations of our generalized Riemann-Hilbert correspondence for complex symplectic manifolds.
Furthermore, formal series which appear e.g. in the stationary phase method or Feynman expansions (in infinite dimensions) are Borel re-summable (resurgent). If time permits I will explain the underlying mathematical structure which we call analytic wall-crossing structure. From the perspective of Holomorphic Floer theory it is related to the estimates for the number of pseudo-holomorphic discs with boundaries on two given complex Lagrangian submanifolds.
Virtual invariants of Quot schemes of surfaces
The Quot schemes of surfaces parametrizing quotients of dimension at most 1 of the trivial sheaf carry 2-term perfect obstruction theories. Several generating series of associated virtual invariants are conjectured to be given by rational functions. We show this is the case for several geometries including all smooth projective surfaces with p_g>0. This talk is based on joint work with Noah Arbesfeld, Drew Johnson, Woonam Lim and Rahul Pandharipande.
Daniel Halpern-Leistner
Derived Theta-stratifications and the D-equivalence conjecture
The D-equivalence conjecture predicts that birationally equivalent projective Calabi-Yau manifolds have equivalent derived categories of coherent sheaves. It is motivated by homological mirror symmetry, and has inspired a lot of recent work on connections between birational geometry and derived categories. In dimension 3, the conjecture is settled, but little is known in higher dimensions. I will discuss a proof of this conjecture for the class of Calabi-Yau manifolds that are birationally equivalent to some moduli space of stable sheaves on a K3 surface. This is the only class for which the conjecture is known in dimension >3. The proof uses a more general structure theory for the derived category of an algebraic stack equipped with a Theta-stratification, which we apply in this case to the Harder-Narasimhan stratification of the moduli of sheaves.
Yifeng Huang
Fri 6 Nov 2020, 9:00am
Cohomology of configuration spaces of punctured varieties
Fri 6 Nov 2020, 9:00am-10:30am
Given a smooth complex variety X (not necessarily compact), consider the unordered configuration space Conf^n(X) defined as {(x_1,...,x_n)\in X^n: x_i \neq x_j for i\neq j} / S_n. The singular cohomology of Conf^n(X) has long been an active area of research. In this talk, we investigate the following phenomenon: "puncturing once more" seems to have a very predictable effect on the cohomology of configuration spaces when the variety we start with is noncompact. Specifically, a formula of Napolitano determines the Betti numbers of Conf^n(X - {P}) from the Betti numbers of Conf^m(X) (m \leq n) if X is a smooth *noncompact* algebraic curve and P is a point. We present a new proof using an explicit algebraic method, which also upgrades this formula about Betti numbers into a formula about mixed Hodge numbers and generalizes this formula to certain cases where X is of higher dimension.
Fri 13 Nov 2020, 8:30am
Quantum K-theory of GIT quotients
Fri 13 Nov 2020, 8:30am-9:30am
(with E. Gonzalez) I will discuss a generalization of the Kirwan map to quantum K-theory, a presentation of quantum K-theory of toric varieties, and some open questions.
Verlinde/Grassmannian Correspondence
Fri 13 Nov 2020, 9:45am-10:45am
In the 90s, Witten gave a physical derivation of an isomorphism between the Verlinde algebra of GL(n) of level l and the quantum cohomology ring of the Grassmannian Gr(n,n+l). In the joint work arXiv:1811.01377 with Yongbin Ruan, we proposed a K-theoretic generalization of Witten's work by relating the GL_n Verlinde numbers to the level l quantum K-invariants of the Grassmannian Gr(n,n+l), and refer to it as the Verlinde/Grassmannian correspondence. The correspondence was formulated precisely in the aforementioned paper, and we proved the rank 2 case (n=2) there.
In this talk, I will first explain the background of this correspondence and its interpretation in physics. Then I will discuss the main idea of the proof for arbitrary rank. A new technical ingredient is the virtual nonabelian localization formula developed by Daniel Halpern-Leistner.
Permutohedral Complexes and Curves With Cyclic Action
Although the moduli space of genus-zero curves is not a toric variety, it shares an intriguing amount of the combinatorial structure that a toric variety would enjoy. In fact, by adjusting the moduli problem slightly, one finds a moduli space that is indeed toric, known as Losev-Manin space. The associated polytope is the permutohedron, which also encodes the group-theoretic structure of the symmetric group. Batyrev and Blume generalized this story by constructing a "type-B" version of Losev-Manin space, whose associated polytope is a signed permutohedron that relates to the group of signed permutations. In joint work in progress with C. Damiolini, D. Huang, S. Li, and R. Ramadas, we carry out the next stage of generalization, defining a family of moduli spaces of curves with Z_r-action encoded by an associated "permutohedral complex" for a more general complex reflection group, which specializes when r=2 to Batyrev and Blume's moduli space.
Putting the "volume" back in volume polynomials
It is a strange and wonderful fact that Chow rings of matroids behave in many ways similarly to Chow rings of smooth projective varieties. Because of this, the Chow ring of a matroid is determined by a homogeneous polynomial called its volume polynomial, whose coefficients record the degrees of all possible top products of divisors. The use of the word "volume" is motivated by the fact that the volume polynomial of a smooth projective toric variety actually measures the volumes of certain polytopes associated to the variety. In the matroid setting, on the other hand, no such polytopes exist, and the goal of our work was to find more general polyhedral objects whose volume is measured by the volume polynomial of matroids. In this talk, I will motivate and describe these polyhedral objects. This is joint work with Anastasia Nathanson.
Junliang Shen
Fri 4 Dec 2020, 8:30am
Intersection cohomology of the moduli of 1-dimensional sheaves and the moduli of Higgs bundles
Fri 4 Dec 2020, 8:30am-9:30am
In general, the topology of the moduli space of semistable sheaves on an algebraic variety relies heavily on the choice of the Euler characteristic of the sheaves. We show a striking phenomenon that, for the moduli of semistable sheaves on a toric del Pezzo surface (e.g. P^2) or the moduli of semistable Higgs bundles with respect to a divisor of degree > 2g-2 on a curve, the intersection cohomology of the moduli space is independent of the choice of the Euler characteristic. This confirms a conjecture of Bousseau for P^2, and proves a conjecture of Toda in the case of local toric Calabi-Yau 3-folds. In the proof, a generalized version of Ngô's support theorem plays a crucial role. Based on joint work in progress with Davesh Maulik.
Nilpotent orbits and affine Grassmannians
Fri 4 Dec 2020, 9:45am-10:45am
Nilpotent orbits are important algebraic varieties arising in a number of different areas of mathematics. In spite of their simple definition, it is an open problem to give the defining equations of nilpotent orbit closures over a field of positive characteristic. For certain nilpotent orbits, Pappas and Rapoport conjectured a simple and explicit characteristic-free answer to this problem, with applications to their work in number theory. In this talk I will discuss a proof of their conjecture. It is part of a larger program describing the defining equations of an infinite-dimensional variety called the affine Grassmannian. I will also give an overview of the basics of nilpotent orbits, a subject which doesn't require much more than a background in basic linear algebra. This is based on joint work with Dinakar Muthiah and Oded Yacobi.
Wall-crossing and differential equations
The Kontsevich-Soibelman wall-crossing formula describes the wall-crossing behavior of BPS invariants in Donaldson-Thomas theory. It can be formulated as an identity between (possibly infinite) products of automorphisms of a formal power series ring. In this talk, I will explain how these same products also appear in the exact WKB analysis of Schrödinger's equation. In this context, they describe the Stokes phenomenon for objects known as Voros symbols.
Jean-Loup Waldspurger
Jean-Loup Waldspurger (born July 2, 1953) is a French mathematician working on the Langlands program and related areas. He proved Waldspurger's theorem, the Waldspurger formula, and the local Gan–Gross–Prasad conjecture for orthogonal groups. He played a role in the proof of the fundamental lemma, reducing the conjecture to a version for Lie algebras. This formulation was ultimately proven by Ngô Bảo Châu.
Born: July 2, 1953, France
Nationality: French
Alma mater: École normale supérieure
Awards: Silver Medal of CNRS
Fields: Mathematics
Education
Waldspurger attained his doctorate at École normale supérieure in 1980, under the supervision of Marie-France Vignéras.
Scientific work
J.-L. Waldspurger's work concerns the theory of automorphic forms. He established links between the Fourier coefficients of modular forms of half-integral weight and the values of L-functions or periods of modular forms of integral weight. With C. Moeglin, he proved Jacquet's conjecture describing the discrete spectrum of the groups GL(n).[1] Other works are devoted to orbital integrals on p-adic groups: unipotent orbital integrals, and a proof of the Langlands-Shelstad transfer conjecture conditional on the "fundamental lemma" (which was later proved by Ngô Bảo Châu[2]). J.-L. Waldspurger proved the Gross-Prasad conjecture for SO(N) groups over a p-adic field. With C. Moeglin, he wrote two large volumes establishing the stable trace formula for twisted spaces.[3]
Some recent publications are available on his website.[4]
Awards
He won the Mergier–Bourdeix Prize of the French Academy of Sciences in 1996. He was awarded the 2009 Clay Research Award for his results in p-adic harmonic analysis. He was elected as a member of French Academy of Sciences in 2017.[5]
References
1. Mœglin, C.; Waldspurger, J.-L. (1989). "Le spectre résiduel de ${\rm {GL}}(n)$". Annales scientifiques de l'École normale supérieure. Societe Mathematique de France. 22 (4): 605–674. doi:10.24033/asens.1595. ISSN 0012-9593.
2. Ngô, Bao Châu (23 April 2010). "Le lemme fondamental pour les algèbres de Lie". Publications mathématiques de l'IHÉS (in French). Springer Science and Business Media LLC. 111 (1): 1–169. doi:10.1007/s10240-010-0026-7. ISSN 0073-8301.
3. Moeglin, Colette; Waldspurger, Jean-Loup (2016). Stabilisation de la formule des traces tordue. Volume 1 (in French). Cham. ISBN 978-3-319-30049-8. OCLC 965778158.
4. "Publications".
5. "DIX-HUIT NOUVEAUX MEMBRES ÉLUS A L'ACADÉMIE DES SCIENCES" (PDF). 6 December 2017.
• Jean-Loup Waldspurger, Mathématicien (in French)
Gelfand–Graev representation
In representation theory, a branch of mathematics, the Gelfand–Graev representation is a representation of a reductive group over a finite field introduced by Gelfand & Graev (1962), induced from a non-degenerate character of a Sylow subgroup.
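Concretely, for a reductive group G over a finite field of characteristic p, the Sylow subgroup in question can be taken to be the group of rational points of a maximal unipotent subgroup U (the unipotent radical of a Borel subgroup), which is a Sylow p-subgroup of the finite group; the Gelfand–Graev representation is then the representation induced from a non-degenerate (generic) character of U.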
The Gelfand–Graev representation is reducible and decomposes as the sum of irreducible representations, each of multiplicity at most 1. The irreducible representations occurring in the Gelfand–Graev representation are called regular representations. These are the analogues for finite groups of representations with a Whittaker model.
References
• Carter, Roger W. (1985), Finite groups of Lie type. Conjugacy classes and complex characters., Pure and Applied Mathematics (New York), New York: John Wiley & Sons, ISBN 978-0-471-90554-7, MR 0794307
• Gelfand, I. M.; Graev, M. I. (1962), "Construction of irreducible representations of simple algebraic groups over a finite field", Doklady Akademii Nauk SSSR, 147: 529–532, ISSN 0002-3264, MR 0148765 English translation in volume 2 of Gelfand's collected works.
\begin{document}
\begin{center} \large{\textbf{Cayley-Dickson Process and\\ Centrally Essential Rings}} \end{center}
{\sf V.T. Markov}
Lomonosov Moscow State University
e-mail: [email protected]
{\sf A.A. Tuganbaev}
National Research University "MPEI"
Lomonosov Moscow State University
e-mail: [email protected]
{\bf Abstract.} We describe the associative center $N(R)$ and the center $Z(R)$ of the ring $R$ obtained by applying the generalized Cayley-Dickson construction, and we find conditions under which the ring $R$ is $N$-essential or centrally essential. The obtained results are applied to generalized quaternion rings and octonion rings; we use them to construct an example of a non-associative centrally essential ring.
V.T.Markov is supported by the Russian Foundation for Basic Research, project 17-01-00895-A. A.A.Tuganbaev is supported by Russian Scientific Foundation, project 16-11-10013.
{\bf Key words:} centrally essential ring, Cayley-Dickson process.
\section{Introduction}\label{section1} All considered rings are unital but not necessarily associative.
We denote by $N(R)$, $K(R)$, $Z(R)$ the associative center, the commutative center and the center (in the sense of \cite[$\S$7.1]{Zhevlakov}) of the ring $R$, respectively; the definitions are also given at the end of the introduction. It is clear that $N(R)$, $Z(R)$ are subrings in $R$ and the ring $R$ is a unitary (left and right) $N(R)$-module and $Z(R)$-module.
We denote by $[A,A]$ the ideal of the ring $A$ generated by commutators of all its elements.
\textbf{1.1. The definitions of a centrally essential ring and an $N$-essential ring.}\\ A ring $R$ is said to be \emph{centrally essential}\label{Z-ess} if $Z(R)r\cap Z(R)\neq 0$ for any non-zero element $r\in R$, i.e., $Z=Z(R)$ is an essential submodule of the module ${}_ZR$.\\ A ring $R$ is said to be \emph{left $N$-essential}\label{N-ess} if $N(R)r\cap N(R)\neq 0$ for any non-zero element $r\in R$, i.e., $N=N(R)$ is an essential submodule of the module ${}_NR$.
The following definition somewhat generalizes the definition of the Cayley-Dickson process given in \cite[\S 2.2]{Zhevlakov}; see \cite{Albert}.
\textbf{1.2. The Cayley-Dickson process.}\label{pCD} Let $A$ be a ring with involution $*$ and $\alpha$ an invertible symmetrical element of the center of the ring $A$. We define a multiplication operation on the Abelian group $A\oplus A$ as follows: \begin{equation}\label{mCD} (a_1,a_2)(a_3,a_4) = (a_1a_3 +\alpha a_4a_2^*\;,\; a_1^*a_4 + a_3a_2) \end{equation} for any $a_1,\ldots,a_4\in A$. We denote the obtained ring by $(A,\alpha)$.
The elements of the ring $(A,\alpha)$ of the form $(a,0)$, $a\in A$, form a subring of the ring $(A,\alpha)$ which is isomorphic to the ring $A$; we will identify them with the corresponding elements of the ring $A$. We set $\nu=(0,1)\in (A,\alpha)$. Then $a^*\nu=(0,a)=\nu a$ for any $a\in A$ and $\nu^2=\alpha$. Thus, $(A,\alpha)=A+A\nu$.
The following properties are directly verified with the use of relation \eqref{mCD}.\label{elprop}
\textbf{1.2.1.} $\nu^2=\alpha$ and $\nu a=a^*\nu$ for any element $a\in A$.
\textbf{1.2.2.} $(1,0)$ is the identity element of the ring $(A,\alpha)$.
\textbf{1.2.3.} The set $\{(a,0)\,|\, a\in A\}$ is a subring of the ring $(A,\alpha)$ which is isomorphic to the ring $A$.
\textbf{1.2.4.} The mapping $(a,b)\mapsto (a^*,-b)$, $a,b\in A$, is an involution of the ring $(A,\alpha)$.
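For instance, property 1.2.1 is obtained from relation \eqref{mCD} by the direct computation
$$
\nu^2=(0,1)(0,1)=(0\cdot 0+\alpha\cdot 1\cdot 1^*\;,\;0^*\cdot 1+0\cdot 1)=(\alpha,0)=\alpha,
$$
$$
\nu a=(0,1)(a,0)=(0\cdot a+\alpha\cdot 0\cdot 1^*\;,\;0^*\cdot 0+a\cdot 1)=(0,a)=(a^*,0)(0,1)=a^*\nu .
$$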
The main results of this paper are Theorems 1.3, 1.4 and 1.5.
\textbf{Theorem 1.3.}\label{main} Let $A$ be a ring with center $C=Z(A)$, $I=\mathop{\rm Ann}\nolimits_C([A,A])$, $R=(A,\alpha)$.\\ \textbf{1.3.1.} $N(R)=\{(x,y)\colon x\in C,\;y\in I\}$.\\ \textbf{1.3.2.} A ring $R$ is left (right) $N$-essential if and only if $A$ is a centrally essential ring and $I$ is an essential ideal of the ring $C$.
\textbf{Theorem 1.4.}\label{main2} Let $A$ be a ring with center $C=Z(A)$, $I=\mathop{\rm Ann}\nolimits_C([A,A])$, $B=\{a\in C\colon a=a^*\}$, $J=\mathop{\rm Ann}\nolimits_B(\{a-a^*\;:\;a\in A\})$, $R=(A,\alpha)$.\\
\textbf{1.4.1.} $Z(R)=\{(x,y)\;|\;x\in B\cap C,\;y\in I\cap J\}$.\\ \textbf{1.4.2.} The ring $R$ is centrally essential if and only if $B$ is an essential $B$-submodule of the ring $R$ and $J$ is an essential ideal of the ring $B$.
\textbf{Theorem 1.5.}\label{main3}\\ \textbf{1.5.1.} There exists a finite non-associative and non-commutative alternative centrally essential ring.\\ \textbf{1.5.2.} There exists a finite non-commutative and non-alternative centrally essential ring.
The proof of Theorems 1.3, 1.4 and 1.5 is decomposed into several assertions given in consequent sections. Some of these assertions are of independent interest.
We give some necessary definitions and notation. The \emph{associator} of three elements $a,b,c$ of the ring $R$ is the element $(a,b,c)=(ab)c-a(bc)$ and the \emph{commutator} of two elements $a,b\in R$ is the element $[a,b]=ab-ba$. For ring $R$, the \emph{associative center, commutative center and center} of $R$ are the sets $$ \begin{array}{l} N(R)=\{x\in R:\forall a,b\in R,\;(x,a,b)=(a,x,b)=(a,b,x)=0\},\\ K(R)=\{x\in R:\forall a\in R,\;[x,a]=0\},\\ Z(R)=N(R)\cap K(R), \end{array} $$ respectively
If $M$ is a left module over the ring $R$ and $S$ is a subset of the module $M$, then $\mathop{\rm Ann}\nolimits_R(S)$ denotes the annihilator of the set $S$ in the ring $R$, i.e., $\mathop{\rm Ann}\nolimits_R(S)=\{r\in R: rS=0\}$.
A ring $R$ is said to be \emph{alternative} if $(a,a,b)=(a,b,b)=0$ for any $a,b\in R$. By the Artin theorem \cite[Theorem 2.3.2]{Zhevlakov}, the ring $R$ is alternative if and only if any two elements of $R$ generate an associative subring.
\section{Associative Center of the Ring Obtained by Applying Cayley-Dickson Process} \label{section2} In Section 2, we fix a ring $A$ and an element $\alpha$ which satisfy the conditions of the Cayley-Dickson process from 1.2 \label{pCD}; we also set $R=(A,\alpha)$.
\textbf{Lemma 2.1.}\label{ident} An element $(x,y)\in R$ belongs to the ring $N(R)$ if and only if for any two elements $u,v\in A$, the following two relation systems hold \begin{equation}\label{id_x} \begin{array}{l} (xu)v=x(uv),(ux)v=u(xv),(uv)x=u(vx),\\ v(ux)=x(vu),(xu)v=u(vx),(vu)x=(xv)u,\\ v(xu)=(vu)x,v(ux)=(vx)u,x(uv)=u(xv),\\ (ux)v=(uv)x,v(xu)=(xv)u,x(vu)=(vx)u; \end{array} \end{equation} \begin{equation}\label{id_y} \begin{array}{l} (uy)v=y(vu),(uy)v=(yv)u,y(vu)=u(yv),\\ v(yu)=y(uv),(yu)v=(vy)u,y(uv)=(vy)u,\\ v(uy)=(uv)y,v(uy)=u(vy),(vu)y=u(vy),\\ (yu)v=(vu)y,v(yu)=u(yv),(uv)y=(yv)u. \end{array} \end{equation}
\textbf{Proof.} Let $(x,y)\in R$. Since associators are linear, \mbox{$(x,y)\in N(R)$} if and only if for any two elements $u,v\in A$, we have
\begin{equation}\label{associators}
\begin{array}{l}
((x,y),(u,0),(v,0))=((u,0),(x,y),(v,0))=((u,0),(v,0),(x,y))=0,\\
((x,y),(u,0),(0,v))=((u,0),(x,y),(0,v))=((u,0),(0,v),(x,y))=0,\\
((x,y),(0,u),(v,0))=((0,u),(x,y),(v,0))=((0,u),(v,0),(x,y))=0,\\
((x,y),(0,u),(0,v))=((0,u),(x,y),(0,v))=((0,u),(0,v),(x,y))=0.
\end{array}
\end{equation}
By calculating the associators in \eqref{associators}, we obtain the following system of 12 relations
$$
\begin{array}{l}
((xu)v,v(uy))=(x(uv),(uv)y),\\
((ux)v,v(u^*y))=(u(xv),u^*(vy)),\\
((uv)x,(v^*u^*)y)=(u(vx),u^*(v^*y)),\\
({\alpha}v(y^*u^*),(u^*x^*)v)=({\alpha}(u^*v)y^*,x^*(u^*v)),\\
({\alpha}v(y^*u),(x^*u^*)v)=({\alpha}u(vy^*),u^*(x^*v)),\\
({\alpha}y(v^*u),x(u^*v))=({\alpha}u(yv^*),u^*(xv)),\\
({\alpha}(uy^*)v,v(x^*u))=({\alpha}(vu)y^*,x^*(vu)),\\
({\alpha}(yu^*)v,v(xu))=({\alpha}(vy)u^*,(xv)u),\\
({\alpha}y(u^*v^*),x(vu))=({\alpha}(v^*y)u^*,(vx)u),\\
({\alpha}v(u^*x),{\alpha}(yu^*)v)=({\alpha}x(vu^*),{\alpha}(vu^*)y),\\
({\alpha}v(u^*x^*),{\alpha}(uy^*)v)=({\alpha}(x^*v)u^*,{\alpha}(vy^*)u),\\
({\alpha}(vu^*)x,{\alpha}(uv^*)y)=({\alpha}(xv)u^*,{\alpha}(yv^*)u).
\end{array}
$$
By equating the components of equal elements of the ring $R$ and taking into account that the element $\alpha$ is invertible, we obtain the equivalent system
$$
\begin{array}{l}
(xu)v=x(uv),\quad v(uy)=(uv)y,\quad (ux)v=u(xv),\quad v(u^*y)=u^*(vy),\\
(uv)x=u(vx),\quad (v^*u^*)y=u^*(v^*y),\quad v(y^*u^*)=(u^*v)y^*,\quad (u^*x^*)v=x^*(u^*v),\\
v(y^*u)=u(vy^*),\quad (x^*u^*)v=u^*(x^*v),\quad y(v^*u)=u(yv^*),\quad u^*(xv)=x(u^*v),\\
(uy^*)v=(vu)y^*,\quad v(x^*u)=x^*(vu),\quad (yu^*)v=(vy)u^*,\quad v(xu)=(xv)u,\\
y(u^*v^*)=(v^*y)u^*,\quad x(vu)=(vx)u,\quad v(u^*x)=x(vu^*),\quad (yu^*)v=(vu^*)y,\\
v(u^*x^*)=(x^*v)u^*,\quad (uy^*)v=(vy^*)u,\quad (vu^*)x=(xv)u^*,\quad (uv^*)y=(yv^*)u.
\end{array}
$$
We replace each equation whose both sides contain $x^*$ or $y^*$ by the corresponding relation for the conjugate elements. We note that either $u$ or $u^*$ occurs in every equation; since $A^*=A$, we may write $u$ instead of $u^*$, and similarly $v$ instead of $v^*$. The equations containing $x$ then give \eqref{id_x}, and the remaining equations form the system \eqref{id_y}.~
$\square$
\textbf{Lemma 2.2.}\label{x_in_Z} Let $x\in A$. The relations \eqref{id_x} hold for all $u,v\in A$ if and only if $x\in Z(A)$.
\textbf{Proof.} Let $x\in A$ and let relations~\eqref{id_x} hold for all $u,v\in A$. The first three relations mean that $x\in N(A)$. Setting $u=1$ in the fourth relation, we obtain $x\in K(A)$. Consequently, $x\in Z(A)$.
Conversely, if $x\in Z(A)$, then each of the relations in \eqref{id_x} is transformed into one of the true relations $x(uv)=x(uv)$ or $x(vu)=x(vu)$, i.e., relations \eqref{id_x} hold for all $u,v\in A$.~
$\square$
\textbf{Lemma 2.3.}\label{y_in_Ann_I} Let $y\in A$. The relations \eqref{id_y} hold for all $u,v\in A$ if and only if $y\in \mathop{\rm Ann}\nolimits_{Z(A)}([A,A])$.
\textbf{Proof.} Let $y\in A$ and let relations \eqref{id_y} hold for all $u,v\in A$. First of all, we note that for $v=1$, the first equation of \eqref{id_y} turns into the equation $uy=yu$; this is equivalent to the inclusion $y\in K(A)$, since the element $u$ of $A$ is arbitrary.
We verify that $y\in N(A)$. For any two elements $u,v\in A$, we have $$ \begin{array}{l}
(yu)v\stackrel{\overline 1}{=}(vy)u\stackrel{\overline 2}{=}y(uv),\\
(uy)v\stackrel{1}{=}y(vu)\stackrel{3}{=}u(yv),\\
(uv)y=y(uv)\stackrel{4}{=}v(yu)=v(uy)\stackrel{8}{=}u(vy). \end{array} $$ In these transformations, the number over the relation sign is the number of the equation in \eqref{id_y} that is used (the equations are numbered in rows from left to right, beginning with the first row). An overlined number denotes that, instead of the given equation, we use the equivalent equation obtained by permuting the variables $u,v$.
Consequently, $y\in N(A)\cap K(A)=Z(A)$.
Finally, taking into account what has already been proved, we see that the first equation of \eqref{id_y} implies that $y[u,v]=0$ for any $u,v\in A$, i.e., $y\in\mathop{\rm Ann}\nolimits_C([A,A])$.
Conversely, if $y\in \mathop{\rm Ann}\nolimits_{Z(A)}([A,A])$, then each of the relations \eqref{id_y} is transformed into the true relation $y(uv)=y(vu)$, i.e., relations \eqref{id_y} hold for all $u,v\in A$.~
$\square$
\textbf{Remark 2.4.} Theorem 1.3.1\label{main} follows from Lemma 2.1, Lemma 2.2 and Lemma 2.3.\label{ident}\label{x_in_Z}\label{y_in_Ann_I}.
\textbf{Remark 2.5.} The following classical result (cf. \cite[Exercise 2.2.2(a)]{Zhevlakov}) follows from the above:\label{crit}\\ A ring $R=(A,\alpha)$ is associative if and only if the ring $A$ is associative and commutative.
\section{$N$-Essentiality Criterion of the Ring Obtained by the Cayley-Dickson Process}
\textbf{Lemma 3.1.}\label{ess_in_R} Let $B$ be a subring of the center of the ring $A$ and $I$ an essential ideal of the ring $B$. If $B$ is an essential $B$-submodule of the module ${}_BA$, then $I$ is an essential $B$-submodule of the module ${}_BR$.
\textbf{Proof.} If $r$ is a non-zero element of the ring $R$, then there exists an element $b\in B$ with $0\neq br\in B$. Therefore, there exists an element $d\in B$ such that $0\neq dbr\in I$, whence $Br\cap I\neq 0$.~
$\square$
\textbf{3.2. The proof of Theorem 1.3.2.} Let ring $A$ and the element $\alpha$ satisfy the conditions of 1.2 (the Cayley-Dickson process).\label{pCD} We set $C=Z(A)$ and $I=\mathop{\rm Ann}\nolimits_C([A,A])$. It is obvious that $C^*=C$, $I^*=I$, $\alpha C=C$ and $\alpha I=I$.
Let the ring $R=(A,\alpha)$ be $N$-essential. Then for any non-zero element $a\in A$ there exists an element $(x,y)\in N(R)$ such that $(x,y)(a,0)=(xa,ay)\in N(R)\setminus\{0\}$. By Theorem 1.3.1\label{main}, $x\in C$ and $y\in I$. If $xa\neq 0$, then $xa\in C\setminus\{0\}$; otherwise, $ya\in C\setminus\{0\}$. In both cases, $Ca\cap C\neq 0$. Thus, $A$ is a centrally essential ring.
We prove that $I$ is an essential ideal of the ring $C$. Let $c\in C\setminus \{0\}$. If $Ic\neq 0$, then $Ic\subseteq I$ and $Cc\cap I\neq 0$. Let $Ic=0$. We consider the element $(0,c)$. There exists an element $(x,y)\in N(R)$ such that $(x,y)(0,c)=(\alpha cy,x^*c)\in N(R)\setminus\{0\}$. Since $\alpha y\in I$ and $Ic=0$, we have $\alpha cy=0$; hence $x^*c\neq 0$ and $x^*c\in I$ by Theorem 1.3.1. Consequently, $Cc\cap I\neq 0$, as required.
Conversely, assume that $A$ is a centrally essential ring and $I$ is an essential ideal in $C$.
Let $(x,y)\in R\setminus\{0\}$. First, assume that $x\neq 0$. There exists an element $c\in C$ such that $cx\in C\setminus\{0\}$. Since $(c,0)\in N(R)$, we have $0\neq (c,0)(x,y)=(cx,c^*y)\in N(R)(x,y)$. If $c^*y=0$, then $0\neq (cx,0)\in N(R)(x,y)\cap N(R)$. If $c^*y\neq 0$, then by Lemma 3.1\label{ess_in_R} (for $B=C$) there exists an element $d\in C$ such that $dc^*y\in I\setminus \{0\}$. Then $$ (d^*,0)(c,0)(x,y)=(d^*,0)(cx,c^*y)=(d^*cx,dc^*y)\in N(R)(x,y)\cap N(R)\setminus\{0\}. $$ Now let $x=0$. Then $y\neq 0$ and, by Lemma 3.1 (for $B=C$), there exists an element $d\in C$ such that $dy\in I\setminus\{0\}$; then $0\neq(d^*,0)(0,y)=(0,dy)\in N(R)(x,y)\cap N(R)$. Thus, the ring $R$ is $N$-essential.~
$\square$
\section{Proof of Theorem 1.4}\label{main2}\label{section4} We fix the ring $A$ with center $C=Z(A)$ and element $\alpha$ which satisfy 1.2 (the Cayley-Dickson process).\label{pCD} Let $R=(A,\alpha)$, \begin{equation}\label{centerdefs} \begin{array}{l}I=\mathop{\rm Ann}\nolimits_C([A,A]),\quad B=\{a\in C:\;a=a^*\},\\ J=\mathop{\rm Ann}\nolimits_B(\{a-a^*\;:\;a\in A\}). \end{array} \end{equation} We note that the sets $B$ and $J$ are invariant with respect to the involution and are closed with respect to the multiplication by $\alpha$.
\textbf{Proposition 4.1.}\label{main2.1}
$Z(R)=\{(x,y)\,|\, x\in B,\,y\in I\cap J\}$.
\textbf{Proof.} Let $(x,y)\in Z(R)$. Since $Z(R)\subseteq N(R)$, it follows from Theorem 1.3 \label{main} that $x\in C$ and $y\in I$. The relations $(0,1)(x,y)=(x,y)(0,1)$ imply the relations $\alpha y=\alpha y^*$ and $x=x^*$. Consequently, $x\in B$ and $y\in B\cap I$. Next, the relation $(a,0)(x,y)=(x,y)(a,0)$, $a\in A$, implies the relations $ax=xa$ and $ay=a^*y$. The first relation holds for any $x\in C$ and the second relation means that $y(a-a^*)=0$, i.e., $y\in J$. Consequently, $y\in I\cap J$.
Conversely, if $x\in B$ and $y\in I\cap J$, then $(x,y)\in N(R)$ and for any $a,b\in A$ we have \begin{equation*} \begin{array}{l} (x,y)(a,b)=(xa+\alpha by^*, x^*b+ay)=(xa+\alpha yb, xb+ay),\\ (a,b)(x,y)=(ax+\alpha yb^*,a^*y+xb)=(ax+\alpha yb,xb+ay) \end{array} \end{equation*} Thus, $(x,y)\in K(R)$, whence $(x,y)\in Z(R)$.~
$\square$
\textbf{Proposition 4.2.}\label{main2.2} A ring $R=(A,\alpha)$ is centrally essential if and only if $B$ is an essential $B$-submodule of the ring $R$ and $J'=J\cap I$ is an essential ideal of the ring $B$.
\textbf{Proof.} Let the ring $R=(A,\alpha)$ be centrally essential. Then for any $a\in A\setminus\{0\}$, there exists an element $(x,y)\in Z(R)$ such that $(x,y)(a,0)=(xa,ay)\in Z(R)\setminus\{0\}$. By Proposition 4.1\label{main2.1}, $x,xa\in B$ and $y,ay=ya\in J'$. If $xa\neq 0$, then $xa\in B\setminus\{0\}$; otherwise, $ya\in B\setminus\{0\}$. In both cases, we have $Ba\cap B\neq 0$. Thus, $B$ is an essential submodule of the module ${}_BA$.
We prove that $J'$ is an essential ideal of the ring $B$. Let $b\in B\setminus \{0\}$. If $J'b\neq 0$, then $J'b\subseteq J'$ and $Bb\cap J'\supseteq J'b\cap J'\neq 0$. Let $J'b=0$. We consider the element $(0,b)$. There exists an element $(x,y)\in Z(R)$ such that $(x,y)(0,b)=(\alpha by,x^*b)\in Z(R)\setminus\{0\}$. Since $\alpha y\in J'$ and $J'b=0$, we have $\alpha by=0$; hence $x^*b\neq 0$, $x\in B$ and $x^*b=xb\in J'$ by Proposition 4.1\label{main2.1}. Consequently, $Bb\cap J'\neq 0$, as required.
Conversely, assume that $B$ is an essential $B$-submodule of the ring $R$ and $J'$ is an essential ideal of the ring $B$.
Let $(x,y)\in R\setminus\{0\}$. First, we assume that $x\neq 0$. There exists an element $b\in B$ such that $bx\in B\setminus\{0\}$. Since $(b,0)\in Z(R)$, we have $0\neq (b,0)(x,y)=(bx,b^*y)\in Z(R)(x,y)$. If $b^*y=0$, then $0\neq (bx,0)\in Z(R)(x,y)\cap Z(R)$. If $b^*y\neq 0$, then by Lemma 3.1\label{ess_in_R} (for $I=J'$) there exists an element $d\in B$ such that $db^*y\in J'\setminus \{0\}$. Then $$ (d^*,0)(b,0)(x,y)=(d^*,0)(bx,b^*y)=(d^*bx,db^*y)\in Z(R)(x,y)\cap Z(R)\setminus\{0\}. $$ Now let $x=0$. Then $y\neq 0$, and there exists an element $d\in B$ such that $dy\in J'\setminus \{0\}$. We obtain $(d^*,0)(0,y)=(0,dy)\in Z(R)\setminus \{0\}$ and $(d^*,0)\in Z(R).$ Thus, the ring $R$ is centrally essential.~
$\square$
\textbf{4.3. The completion of the proof of Theorem 1.4.\label{main2}}\\ The first assertion of Theorem 1.4\label{main2} follows from Proposition 4.1\label{main2.1}.\\The second assertion of Theorem 1.4\label{main2} follows from Proposition 4.2\label{main2.2}.~
$\square$
\section{Generalized Quaternion Algebra and Octonion Algebra \\ over a Commutative Ring}\label{quat-oct}\label{section5} \label{!!!}
Let $K$ be a commutative associative ring with the identity involution and $a$ an invertible element of the ring $K$. We consider the ring $A_1=(K,a)$. Then $A_1$ is a commutative associative ring, since $B=C=I=J=K$, under the notation of Theorem 1.4\label{main2}. It is natural to write elements of the ring $A_1$ in the form $x+yi$, where $x,y$ are elements of the ring $K$, $i=(0,1)$. On the ring $A_1$, an involution is defined by the relation $(x+yi)^*=x-yi$ for any $x,y\in K$. We choose an invertible element $b\in K$. Then $b$ is an invertible symmetrical element of the center of the ring $A_1$ and we can construct the ring $A_2=(A_1,b)$. We consider the $K$-basis of the algebra $A_2$ which is formed by the elements $1=(1,0)$, $i=(i,0)$, $j=(0,1)$ and $k=(0,-i)$. The relations $i^2=a$, $j^2=b$, $ij=-ji=k$, $ik=-ki=aj$, $kj=-jk=bi$ are directly verified. Consequently, the obtained ring is the generalized quaternion algebra $(a,b,K)$ under the notation of \cite{tugan93}. It is well known (and also follows from Theorem 1.3\label{main}) that the ring $A_2$ is associative (e.g., see \cite[Example 7.2.III]{Zhevlakov}). The center of the ring $A_2$ is of the form $K+Ni+Nj+Nk$, where $N=\mathop{\rm Ann}\nolimits_K(2)$ (see \cite[Lemma 2(b)]{tugan93}). Let $B,I,J$ be defined by equations \eqref{centerdefs} for $A=A_2$. It is easy to verify that $B=C=Z(A_2)$, $I=J=N+Ni+Nj+Nk$.
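For instance, the relations $ij=-ji=k$ are obtained from relation \eqref{mCD} (applied in $A_2=(A_1,b)$, where $i^*=-i$ in $A_1$) as follows:
$$
ij=(i,0)(0,1)=(i\cdot 0+b\cdot 1\cdot 0^*\;,\;i^*\cdot 1+0\cdot 0)=(0,-i)=k,
$$
$$
ji=(0,1)(i,0)=(0\cdot i+b\cdot 0\cdot 1^*\;,\;0^*\cdot 0+i\cdot 1)=(0,i)=-k;
$$
the remaining relations are verified in the same way.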
\textbf{Lemma 5.1.}\label{essK} Under the above notation, the ideal $I$ is an essential ideal in $B$ if and only if $N$ is an essential ideal in $K$.
\textbf{Proof.} Let $I$ be an essential ideal in $B$. If $x\in K\setminus\{0\}$, then there exists an element $y\in B$ such that $xy\in I\setminus\{0\}$. We set $y=y_1+y_2i+y_3j+y_4k$, where $y_1\in K$ and $y_2,y_3,y_4\in N$. If $xy_1\neq 0$, then $xK\cap N\neq 0$. Otherwise, at least one of the elements $xy_2,xy_3,xy_4$ is not equal to $0$ and each of them belongs to the ideal $N$, whence $xK\cap N\neq 0$ in this case too.
Conversely, if $N$ is an essential ideal in $K$ and $x=x_1+x_2i+x_3j+x_4k\in B\setminus\{0\}$, then $x_2,x_3,x_4\in N$. If $x_1\neq 0$, then there exists an element $y\in K$ with $yx_1\in N\setminus\{0\}$. Then $yx\in Bx\cap I\setminus\{0\}$. If $x_1=0$, then $x=1\cdot x\in Bx\cap I$. Thus, $I$ is an essential ideal of the ring $B$.~
$\square$
From the above argument, we obtain Proposition 5.2.
\textbf{Proposition 5.2.}\label{csQuat} The quaternion algebra $((K,a),b)$ is a non-commutative centrally essential ring if and only if $\mathop{\rm Ann}\nolimits_K(2)$ is a proper essential ideal of the ring $K$.
Now we consider an arbitrary invertible element $c\in K$ and the ring $A_3=(A_2,c)$. We set $f_1=i$, $f_2=j$, $f_3=k$, $f_4=l=(0,1)$, $f_5=(0,-i)$, $f_6=(0,-j)$, $f_7=(0,-k)$. It can be directly verified that the basis $\{1,f_1,f_2,\ldots,f_7\}$ of the $K$-module $A_3$ satisfies the relations from \cite{gen_quat_oct} for basis elements of the generalized octonion algebra ${\mathbb O}(\alpha, \beta, \gamma)$ (for $\alpha=-a, \beta=-b, \gamma=-c$).
Similar to Proposition 5.2\label{csQuat}, we obtain Proposition 5.3.
\textbf{Proposition 5.3.}\label{csOct} The octonion algebra $(((K,a),b),c)$ is a non-associative centrally essential ring if and only if $\mathop{\rm Ann}\nolimits_K(2)$ is a proper essential ideal of the ring $K$.
\textbf{5.4. The completion of the proof of Theorem 1.5.}\\ Let $K={\mathbb Z}_4$. We prove that $R=(((K,1),1),1)$ is a non-associative non-commutative centrally essential ring.\\ Indeed, $\mathop{\rm Ann}\nolimits_K(2)=2K$ is an essential proper ideal in $K$. Therefore, the non-commutativity of the ring $((K,1),1)$ (and the non-commutativity of the ring $R$ containing $((K,1),1)$) follows from Proposition 5.2\label{csQuat} and the non-associativity of the ring $R$ follows from Proposition 5.3.\label{csOct}.
We note that the ring $R=(((K,1),1),1)$ is an alternative ring and the ring $(R,1)$ is not even a right-alternative ring, i.e., $(R,1)$ does not satisfy the identity $(x,y,y)=0$ \cite[Exercise 7.2.2]{Zhevlakov}. Thus, there exist alternative non-associative finite centrally essential rings and non-alternative finite centrally essential rings.~
$\square$
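The example from 5.4 can also be checked by machine. The following short Python script (ours, not part of the proof; the function names and the representation of elements as nested pairs are our own choices) implements the multiplication \eqref{mCD} with $\alpha=1$ at every step over $K={\mathbb Z}_4$ and verifies the quaternion relations of this section, a non-zero commutator in $((K,1),1)$, and a non-zero associator in $(((K,1),1),1)$.
\begin{verbatim}
MOD = 4                                   # base ring K = Z_4 (identity involution)

def add(x, y):
    if isinstance(x, int):
        return (x + y) % MOD
    return (add(x[0], y[0]), add(x[1], y[1]))

def neg(x):
    if isinstance(x, int):
        return (-x) % MOD
    return (neg(x[0]), neg(x[1]))

def conj(x):                              # (a, b)* = (a*, -b)
    if isinstance(x, int):
        return x
    return (conj(x[0]), neg(x[1]))

def mul(x, y):                            # relation (1) with alpha = 1
    if isinstance(x, int):
        return (x * y) % MOD
    a1, a2 = x
    a3, a4 = y
    return (add(mul(a1, a3), mul(a4, conj(a2))),
            add(mul(conj(a1), a4), mul(a3, a2)))

def zero(x):
    return 0 if isinstance(x, int) else (zero(x[0]), zero(x[1]))

# A_1 = (Z_4, 1) and A_2 = (A_1, 1): generalized quaternions over Z_4
one1, i1 = (1, 0), (0, 1)
one2, i2, j2 = (one1, (0, 0)), (i1, (0, 0)), ((0, 0), one1)
k2 = mul(i2, j2)
assert mul(i2, i2) == one2 and mul(j2, j2) == one2    # i^2 = a, j^2 = b
assert mul(j2, i2) == neg(k2)                         # ij = -ji = k
comm = add(mul(i2, j2), neg(mul(j2, i2)))             # [i, j] = 2k is non-zero in Z_4
assert comm != zero(comm) and add(comm, comm) == zero(comm)

# A_3 = (A_2, 1): generalized octonions; the associator (i, j, l) is non-zero
zero2 = zero(one2)
i3, j3, l3 = (i2, zero2), (j2, zero2), (zero2, one2)
assoc = add(mul(mul(i3, j3), l3), neg(mul(i3, mul(j3, l3))))
assert assoc != zero(assoc)
print("[i,j] =", comm, "  (i,j,l) =", assoc)
\end{verbatim}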
\section{Open Questions}\label{section6}
\mbox{$\,$}
\textbf{6.1.} Is it true that there exist left $N$-essential rings which are not right $N$-essential?
\textbf{6.2.} Is it true that there exist commutative $N$-essential (equivalently, centrally essential) non-associative rings?
\textbf{6.3.} Is it true that there exist right-alternative centrally essential or $N$-essential non-alternative rings?
\textbf{6.4.} How can we generalize the obtained results to the case of non-unital rings and to the case where the element $\alpha$ in 1.2 is not assumed to be invertible?
\end{document}
|
arXiv
|
Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective
Filip Tronarp, Hans Kersting, Simo Särkkä & Philipp Hennig
Statistics and Computing volume 29, pages 1297–1315 (2019)
We formulate probabilistic numerical approximations to solutions of ordinary differential equations (ODEs) as problems in Gaussian process (GP) regression with nonlinear measurement functions. This is achieved by defining the measurement sequence to consist of the observations of the difference between the derivative of the GP and the vector field evaluated at the GP—which are all identically zero at the solution of the ODE. When the GP has a state-space representation, the problem can be reduced to a nonlinear Bayesian filtering problem and all widely used approximations to the Bayesian filtering and smoothing problems become applicable. Furthermore, all previous GP-based ODE solvers that are formulated in terms of generating synthetic measurements of the gradient field come out as specific approximations. Based on the nonlinear Bayesian filtering problem posed in this paper, we develop novel Gaussian solvers for which we establish favourable stability properties. Additionally, non-Gaussian approximations to the filtering problem are derived by the particle filter approach. The resulting solvers are compared with other probabilistic solvers in illustrative experiments.
We consider an initial value problem (IVP), that is, an ordinary differential equation (ODE)
$$\begin{aligned} {\dot{y}}(t) = f\left( y(t),t\right) ,\ \forall t \in [0,T], \quad y(0) = y_0 \in {\mathbb {R}}^d, \end{aligned}$$
with initial value \(y_0\) and vector field \(f:{\mathbb {R}}^d \times {\mathbb {R}}_+ \rightarrow {\mathbb {R}}^d\). Numerical solvers for IVPs approximate \(y:[0,T] \rightarrow {\mathbb {R}}^d\) and are of paramount importance in almost all areas of science and engineering. Extensive knowledge about this topic has been accumulated in numerical analysis literature, for example, in Hairer et al. (1987), Deuflhard and Bornemann (2002) and Butcher (2008). However, until recently, a probabilistic quantification of the inevitable uncertainty—for all but the most trivial ODEs—from the numerical error over their outputs has been omitted.
Moreover, ODEs are often part of a pipeline surrounded by preceding and subsequent computations, which are themselves corrupted by uncertainty from model misspecification, measurement noise, approximate inference or, again, numerical inaccuracy (Kennedy and O'Hagan 2002). In particular, ODEs are often integrated using estimates of its parameters rather than the correct ones. See Zhang et al. (2018) and Chen et al. (2018) for recent examples of such computational chains involving ODEs. The field of probabilistic numerics (PN) (Hennig et al. 2015) seeks to overcome this ignorance of numerical uncertainty and the resulting overconfidence by providing probabilistic numerical methods. These solvers quantify numerical errors probabilistically and add them to uncertainty from other sources. Thereby, they can take decisions in a more uncertainty-aware and uncertainty-robust manner (Paul et al. 2018).
In the case of ODEs, one family of probabilistic solvers (Skilling 1992; Hennig and Hauberg 2014; Schober et al. 2014) first treated IVPs as Gaussian process (GP) regression (Rasmussen and Williams 2006, Chapter 2). Then, Kersting and Hennig (2016) and Schober et al. (2019) sped up these methods by regarding them as stochastic filtering problems (Øksendal 2003). These completely deterministic filtering methods converge to the true solution with high polynomial rates (Kersting et al. 2018). In their methods, data for the 'Bayesian update' is constructed by evaluating the vector field f under the GP predictive mean of y(t) and linked to the model with a Gaussian likelihood (Schober et al. 2019, Section 2.3). See also Wang et al. (2018, Section 1.2) for alternative likelihood models. This conception of data implies that it is the output of the adopted inference procedure. More specifically, one can show that with everything else being equal, two different priors may end up operating on different measurement sequences. Such a coupling between prior and measurements is not standard in statistical problem formulations, as acknowledged in Schober et al. (2019, Section 2.2). It makes the model and the subsequent inference difficult to interpret. For example, it is not clear how to do Bayesian model comparisons (Cockayne et al. 2019, Section 2.4) when two different priors necessarily operate on two different data sets for the same inference task.
Instead of formulating the solution of Eq. (1) as a Bayesian GP regression problem, another line of work on probabilistic solvers for ODEs comprising the methods from Chkrebtii et al. (2016), Conrad et al. (2017), Teymur et al. (2016), Lie et al. (2019), Abdulle and Garegnani (2018) and Teymur et al. (2018) aims to represent the uncertainty arising from the discretisation error by a set of samples. While multiplying the computational cost of classical solvers with the amount of samples, these methods can capture arbitrary (non-Gaussian) distributions over the solutions and can reduce overconfidence in inverse problems for ODEs—as demonstrated in Conrad et al. (2017, Section 3.2.), Abdulle and Garegnani (2018, Section 7) and Teymur et al. (2018). These solvers can be considered as more expensive, but statistically more expressive. This paper contributes a particle filter as a sampling-based filtering method at the intersection of both lines of work, providing a previously missing link.
The contributions of this paper are the following: Firstly, we circumvent the issue of generating synthetic data, by recasting solutions of ODEs in terms of nonlinear Bayesian filtering problems in a well defined state-space model. For any fixed-time discretisation, the measurement sequence and likelihood are also fixed. That is, we avoid the coupling of prior and measurement sequence, that is for example present in Schober et al. (2019). This enables application of all Bayesian filtering and smoothing techniques to ODEs as described, for example, in Särkkä (2013). Secondly, we show how the application of certain inference techniques recovers the previous filtering-based methods. Thirdly, we discuss novel algorithms giving rise to both Gaussian and non-Gaussian solvers.
Fourthly, we establish a stability result for the novel Gaussian solvers. Fifthly, we discuss practical methods for uncertainty calibration, and in the case of Gaussian solvers, we give explicit expressions. Finally, we present some illustrative experiments demonstrating that these methods are practically useful both for fast inference of the unique solution of an ODE as well as for representing multi-modal distributions of trajectories.
Bayesian inference for initial value problems
Formulating an approximation of the solution to Eq. (1) at a discrete set of points \(\{t_n\}_{n=0}^N\) as a problem of Bayesian inference requires, as always, three things: a prior measure, data, and a likelihood, which define a posterior measure through Bayes' rule.
We start with examining a continuous-time formulation in Sect. 2.1, where Bayesian conditioning should, in the ideal case, give a Dirac measure at the true solution of Eq. (1) as the posterior. This has two issues: (1) conditioning on the entire gradient field is not feasible on a computer in finite time and (2) the conditioning operation itself is intractable. Issue (1) is present in classical Bayesian quadrature (Briol et al. 2019) as well. Limited computational resources imply that only a finite number of evaluations of the integrand can be used. Issue (2) turns, what is linear GP regression in Bayesian quadrature, into nonlinear GP regression. While this is unfortunate, it appears reasonable that something should be lost as the inference problem is more complex.
With this in mind, a discrete-time nonlinear Bayesian filtering problem is posed in Sect. 2.2, which targets the solution of Eq. (1) at a discrete set of points.
A continuous-time model
Like previous works mentioned in Sect. 1, we consider priors given by a GP
$$\begin{aligned} X(t) \sim {\mathrm {GP}}\left( {\bar{x}},k\right) , \end{aligned}$$
where \({\bar{x}}(t)\) is the mean function and \(k(t,t')\) is the covariance function. The vector X(t) is given by
$$\begin{aligned} X(t) = \begin{bmatrix}\big (X^{(1)}(t)\big )^{\mathsf {T}}, \dots ,\big (X^{(q+1)}(t)\big )^{\mathsf {T}}\end{bmatrix}^{\mathsf {T}}, \end{aligned}$$
where \(X^{(1)}(t)\) and \(X^{(2)}(t)\) model y(t) and \({\dot{y}}(t)\), respectively. The remaining \(q-1\) sub-vectors in X(t) can be used to model higher-order derivatives of y(t) as done by Schober et al. (2019) and Kersting and Hennig (2016). We define such priors by a stochastic differential equation (Øksendal 2003), that is,
$$\begin{aligned} X(0)&\sim {\mathcal {N}}\big (\mu ^-(0),\varSigma ^-(0) \big ), \end{aligned}$$
$$\begin{aligned} {\mathrm{d}}X(t)&= \big [F X(t) + u\big ] {\mathrm{d}}t + L {\mathrm{d}}B(t), \end{aligned}$$
(3b)
where F is a state transition matrix, u is a forcing term, L is a diffusion matrix, and B(t) is a vector of standard Wiener processes.
Note that for \(X^{(2)}(t)\) to be the derivative of \(X^{(1)}\), F, u, and L are such that
$$\begin{aligned} {\mathrm{d}}X^{(1)}(t) = X^{(2)}(t) {\mathrm{d}}t. \end{aligned}$$
The use of an SDE—instead of a generic GP prior—is computationally advantageous because it restricts the priors to Markov processes due to Øksendal (2003, Theorem 7.1.2). This allows for inference with linear time-complexity in N, while the time-complexity is \(N^3\) for GP priors in general (Hartikainen and Särkkä 2010).
Inference requires data and an associated likelihood. Previous authors, such as Schober et al. (2019) and Chkrebtii et al. (2016), put forth the view of the prior measure defining an inference agent, which cycles through extrapolating, generating measurements of the vector field, and updating. Here we argue that there is no need for generating measurements, since re-writing Eq. (1) yields the requirement
$$\begin{aligned} {\dot{y}}(t) - f(y(t),t) = 0. \end{aligned}$$
This suggests that a measurement relating the prior defined by Eq. (3) to the solution of Eq. (1) ought to be defined as
$$\begin{aligned} Z(t) = X^{(2)}(t) - f(X^{(1)}(t),t). \end{aligned}$$
While conditioning the process X(t) on the event \(Z(t) = 0\) for all \(t \in [0,T]\) can be formalised using the concept of disintegration (Cockayne et al. 2019), it is intractable in general and thus impractical for computer implementation. Therefore, we formulate a discrete-time inference problem in the sequel.
A discrete-time model
In order to make the inference problem tractable, we only attempt to condition the process X(t) on \(Z(t) = z(t) \triangleq 0\) at a set of discrete time points, \(\{t_n\}_{n=0}^N\). We consider a uniform grid, \(t_{n+1} = t_n + h\), though extending the present methods to non-uniform grids can be done as described in Schober et al. (2019). In the sequel, we will denote a function evaluated at \(t_n\) by subscript n, for example \(z_n = z(t_n)\). From Eq. (3), an equivalent discrete-time system can be obtained (Grewal and Andrews 2001, Chapter 3.7.3).Footnote 1 The inference problem becomes
$$\begin{aligned} X_0&\sim {\mathcal {N}}(\mu ^F_0,\varSigma ^F_0), \end{aligned}$$
$$\begin{aligned} X_{n+1} \mid X_n&\sim {\mathcal {N}}\big (A(h)X_n+\xi (h), Q(h)\big ), \end{aligned}$$
$$\begin{aligned} Z_n \mid X_n&\sim {\mathcal {N}}\big ({\dot{C}}X_n - f(C X_n,t_n),R\big ), \end{aligned}$$
$$\begin{aligned} z_n&\triangleq 0,\quad n = 1,\dots ,N, \end{aligned}$$
(7d)
where \(z_n\) is the realisation of \(Z_n\). The parameters A(h), \(\xi (h)\), and Q(h) are given by
$$\begin{aligned} A(h)&= \exp (Fh), \end{aligned}$$
$$\begin{aligned} \xi (h)&= \int _0^h \exp (F(h-\tau )) u {\mathrm{d}}\tau , \end{aligned}$$
$$\begin{aligned} Q(h)&= \int _0^h \exp (F(h-\tau )) L L^{\mathsf {T}}\exp (F^{\mathsf {T}}(h-\tau )) {\mathrm{d}}\tau . \end{aligned}$$
Furthermore, \(C = [\mathrm {I} \ 0 \ \ldots \ 0]\) and \({\dot{C}} = [ 0 \ \mathrm {I} \ 0 \ \ldots \ 0]\). That is, \(CX_n = X_n^{(1)}\) and \({\dot{C}}X_n = X_n^{(2)}\). A measurement variance, R, has been added to \(Z(t_n)\) for greater generality, which simplifies the construction of particle filter algorithms. The likelihood model in Eq. (7c) has previously been used in the gradient matching approach to inverse problems to avoid explicit numerical integration of the ODE (see, e.g. Calderhead et al. 2009).
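As a concrete illustration (not part of the original text), the parameters in Eq. (8) can be evaluated numerically with the standard matrix-fraction construction; the sketch below does this for an integrated Wiener process prior with \(d = 1\), and the helper name iwp_discretise as well as the default \(\varGamma = 1\) are our own choices.

import numpy as np
from scipy.linalg import expm

def iwp_discretise(q, h, gamma=1.0):
    # A(h) and Q(h) of Eq. (8) for a q-times integrated Wiener process prior (d = 1).
    dim = q + 1
    F = np.diag(np.ones(q), k=1)                     # drift matrix of Eq. (3b), u = 0
    L = np.zeros((dim, 1)); L[-1, 0] = np.sqrt(gamma)
    blk = np.block([[F, L @ L.T],
                    [np.zeros((dim, dim)), -F.T]]) * h
    Phi = expm(blk)                                  # matrix-fraction decomposition
    A = Phi[:dim, :dim]                              # A(h) = expm(F h), Eq. (8a)
    Q = Phi[:dim, dim:] @ A.T                        # Q(h) of Eq. (8c)
    return A, Q

A, Q = iwp_discretise(q=2, h=0.1)                    # e.g. a twice-integrated Wiener process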
The inference problem posed in Eq. (7) is a standard problem in nonlinear GP regression (Rasmussen and Williams 2006), also known as Bayesian filtering and smoothing in stochastic signal processing (Särkkä 2013). Furthermore, it reduces to Bayesian quadrature when the vector field does not depend on y. This is Proposition 1.
Proposition 1
Let \(X^{(1)}_0 = 0\), \(f(y(t),t) = g(t)\), \(y(0) = 0\), and \(R = 0\). Then the posteriors of \(\{X^{(1)}_n\}_{n=1}^N\) are Bayesian quadrature approximations for
$$\begin{aligned} \int _0^{nh} g(\tau ) {\mathrm{d}}\tau ,\quad n=1,\dots ,N. \end{aligned}$$
A proof of Proposition 1 is given in "Appendix A".
The Bayesian quadrature method described in Proposition 1 conditions on function evaluations outside the domain of integration for \(n < N\). This corresponds to the smoothing equations associated with Eq. (7). If the integral on the domain [0, nh] is only conditioned on evaluations of g inside the domain, then the filtering estimates associated with Eq. (7) are obtained.
Gaussian filtering
The inference problem posed in Eq. (7) is a standard problem in statistical signal processing and machine learning, and the solution is often approximated by Gaussian filters and smoothers (Särkkä 2013). Let us define \(z_{1:n} = \{ z_l \}_{l=1}^n\) and the following conditional moments
$$\begin{aligned} \mu _n^F&\triangleq {\mathbb {E}}[X_n \mid z_{1:n}], \end{aligned}$$
(10a)
$$\begin{aligned} \varSigma _n^F&\triangleq {\mathbb {V}}[X_n \mid z_{1:n}], \end{aligned}$$
(10b)
$$\begin{aligned} \mu _n^P&\triangleq {\mathbb {E}}[X_n \mid z_{1:n-1}], \end{aligned}$$
(10c)
$$\begin{aligned} \varSigma _n^P&\triangleq {\mathbb {V}}[X_n \mid z_{1:n-1}], \end{aligned}$$
(10d)
where \({\mathbb {E}}[\cdot \mid z_{1:n}]\) and \({\mathbb {V}}[ \cdot \mid z_{1:n}]\) are the conditional mean and covariance operators given the measurements \(Z_{1:n} = z_{1:n}\). Additionally, \({\mathbb {E}}[ \cdot \mid z_{1:0}] = {\mathbb {E}}[ \cdot ]\) and \({\mathbb {V}}[ \cdot \mid z_{1:0}] = {\mathbb {V}}[ \cdot ]\) by convention. Furthermore, \(\mu _n^F\) and \(\varSigma _n^F\) are referred to as the filtering mean and covariance, respectively. Similarly, \(\mu _n^P\) and \(\varSigma _n^P\) are referred to as the predictive mean and covariance, respectively. In Gaussian filtering, the following relationships hold between \(\mu _n^F\) and \(\varSigma _n^F\), and \(\mu _{n+1}^P\) and \(\varSigma _{n+1}^P\):
$$\begin{aligned} \mu _{n+1}^P&= A(h) \mu _n^F + \xi (h), \end{aligned}$$
$$\begin{aligned} \varSigma _{n+1}^P&= A(h) \varSigma _n^F A^{\mathsf {T}}(h) + Q(h), \end{aligned}$$
which are the prediction equations (Särkkä 2013, Eq. 6.6). The update equations, relating the predictive moments \(\mu _{n}^P\) and \(\varSigma _{n}^P\) with the filter estimate, \(\mu _{n}^F\), and its covariance \(\varSigma _{n}^F\), are given by (Särkkä 2013, Eq. 6.7)
$$\begin{aligned} S_n&= {\mathbb {V}}\big [{\dot{C}}X_n - f(CX_n,t_n)\mid z_{1:n-1}\big ] + R, \end{aligned}$$
$$\begin{aligned} K_n&= {\mathbb {C}}\big [X_n,{\dot{C}}X_n - f(CX_n,t_n)\mid z_{1:n-1}\big ] S_n^{-1}, \end{aligned}$$
$$\begin{aligned} {\hat{z}}_n&= {\mathbb {E}}\big [{\dot{C}}X_n - f(CX_n,t_n)\mid z_{1:n-1}\big ], \end{aligned}$$
$$\begin{aligned} \mu _n^F&= \mu _n^P + K_n(z_n - {\hat{z}}_n), \end{aligned}$$
$$\begin{aligned} \varSigma _n^F&= \varSigma _n^P - K_n S_n K_n^{\mathsf {T}}, \end{aligned}$$
(12e)
where the expectation (\({\mathbb {E}}\)), covariance (\({\mathbb {V}}\)) and cross-covariance (\({\mathbb {C}}\)) operators are with respect to \(X_n \sim {\mathcal {N}}(\mu _n^P,\varSigma _n^P)\). Evaluating these moments is intractable in general, though various approximation schemes exist in literature. Some standard approximation methods shall be examined below. In particular, the methods of Schober et al. (2019) and Kersting and Hennig (2016) come out as particular approximations to Eq. (12).
Taylor series methods
A classical method in filtering literature to deal with nonlinear measurements of the form in Eq. (7) is to make a first-order Taylor series expansion, thus turning the problem into a standard update in linear filtering. However, before going through the details of this it is instructive to interpret the method of Schober et al. (2019) as an even simpler Taylor series method. This is Proposition 2.
Proposition 2
Let \(R = 0\) and approximate \(f(C X_n,t_n)\) by its zeroth order Taylor expansion in \(X_n\) around the point \(\mu _n^P\)
$$\begin{aligned} f\big (CX_n,t_n\big ) \approx f\big (C\mu _n^P,t_n\big ). \end{aligned}$$
Then, the approximate posterior moments are given by
$$\begin{aligned} S_n&\approx {\dot{C}} \varSigma _n^P {\dot{C}}^{\mathsf {T}}+ R, \end{aligned}$$
$$\begin{aligned} K_n&\approx \varSigma _n^P {\dot{C}}^{\mathsf {T}}S_n^{-1}, \end{aligned}$$
$$\begin{aligned} {\hat{z}}_n&\approx {\dot{C}} \mu _n^P - f\big (C\mu _n^P,t_n\big ), \end{aligned}$$
$$\begin{aligned} \mu _n^F&\approx \mu _n^P + K_n(z_n - {\hat{z}}_n), \end{aligned}$$
$$\begin{aligned} \varSigma _n^F&\approx \varSigma _n^P - K_n S_n K_n^{\mathsf {T}}, \end{aligned}$$
which is precisely the update by Schober et al. (2019).
A first-order approximation
The approximation in Eq. (14) can be refined by using a first-order approximation, which is known as the extended Kalman filter (EKF) in signal processing literature (Särkkä 2013, Algorithm 5.4). That is,
$$\begin{aligned} f\big (CX_n,t_n\big )&\approx f\big (C\mu _n^P,t_n\big ) \nonumber \\&\quad + J_f\big (C\mu _n^P,t_n\big )C\big (X_n - \mu _n^P\big ), \end{aligned}$$
where \(J_f\) is the Jacobian of \(y \rightarrow f(y,t)\). The filter update is then
$$\begin{aligned} {\tilde{C}}_n&= {\dot{C}} - J_f\big (C\mu _n^P,t_n\big )C, \end{aligned}$$
$$\begin{aligned} S_n&\approx {\tilde{C}}_n \varSigma _n^P{\tilde{C}}_n^{\mathsf {T}}+ R, \end{aligned}$$
$$\begin{aligned} K_n&\approx \varSigma _n^P {\tilde{C}}_n^{\mathsf {T}}S_n^{-1}, \end{aligned}$$
$$\begin{aligned} {\hat{z}}_n&\approx {\dot{C}} \mu _n^P - f\big (C\mu _n^P,t_n\big ), \end{aligned}$$
$$\begin{aligned} \mu _n^F&\approx \mu _n^P + K_n(z_n - {\hat{z}}_n), \end{aligned}$$
$$\begin{aligned} \varSigma _n^F&\approx \varSigma _n^P - K_n S_n K_n^{\mathsf {T}}. \end{aligned}$$
(16f)
Hence the extended Kalman filter computes the residual, \(z_n - {\hat{z}}_n\), in the same manner as Schober et al. (2019). However, as the filter gain, \(K_n\), now depends on evaluations of the Jacobian, the resulting probabilistic ODE solver is different in general.
While Jacobians of the vector field are seldom exploited in ODE solvers, they play a central role in Rosenbrock methods, (Rosenbrock 1963; Hochbruck et al. 2009). The Jacobian of the vector field was also recently used by Teymur et al. (2018) for developing a probabilistic solver.
Although the extended Kalman filter goes as far back as the 1960s (Jazwinski 1970), the update in Eq. (16) results in a probabilistic method for estimating the solution of (1) that appears to be novel. Indeed, to the best of the authors' knowledge, the only Gaussian filtering-based solvers that have appeared so far are those by Kersting and Hennig (2016), Magnani et al. (2017) and Schober et al. (2019).
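To make the recursion concrete, the following sketch (ours, not from the paper; the function and argument names are our own) performs one solver step: the prediction of Eq. (11) followed by the first-order update of Eq. (16), with \(z_n = 0\), \(\xi (h) = 0\) and \(R = 0\). Passing a Jacobian that is identically zero recovers the zeroth-order update of Eq. (14).

import numpy as np

def ekf_ode_step(mu_f, Sigma_f, A, Q, C, Cdot, f, jac_f, t_next):
    # Prediction, Eq. (11)
    mu_p = A @ mu_f
    Sigma_p = A @ Sigma_f @ A.T + Q
    # Linearisation of the measurement function around mu_p, Eq. (15)
    Ct = Cdot - jac_f(C @ mu_p, t_next) @ C
    # Update, Eq. (16), with measurement z_n = 0 and R = 0
    z_hat = Cdot @ mu_p - f(C @ mu_p, t_next)
    S = Ct @ Sigma_p @ Ct.T
    K = Sigma_p @ Ct.T @ np.linalg.inv(S)
    mu_f_next = mu_p - K @ z_hat
    Sigma_f_next = Sigma_p - K @ S @ K.T
    return mu_f_next, Sigma_f_next

Iterating this step over the grid \(\{t_n\}_{n=1}^N\) gives the filtering means and covariances; a smoothing pass can be added on top in the usual way (Särkkä 2013).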
Numerical quadrature
Another method to approximate the quantities in Eq. (12) is by quadrature, which consists of a set of nodes \(\{{\mathcal {X}}_{n,j}\}_{j=1}^J\) with weights \(\{w_{n,j}\}_{j=1}^J\) that are associated to the distribution \({\mathcal {N}}(\mu _n^P,\varSigma _n^P)\). These nodes and weights can either be constructed to integrate polynomials up to some order exactly (see, e.g. McNamee and Stenger 1967; Golub and Welsch 1969) or by Bayesian quadrature (Briol et al. 2019). In either case, the expectation of a function \(\psi (X_n)\) is approximated by
$$\begin{aligned} {\mathbb {E}}[\psi (X_n)] \approx \sum _{j=1}^J w_{n,j} \psi ({\mathcal {X}}_{n,j}). \end{aligned}$$
Therefore, by appropriate choices of \(\psi \) the quantities in Eq. (12) can be approximated. We shall refer to filters using a third degree fully symmetric rule (McNamee and Stenger 1967) as unscented Kalman filters (UKF), which is the name that was adopted when it was first introduced to the signal processing community (Julier et al. 2000). For a suitable cross-covariance assumption and a particular choice of quadrature, the method of Kersting and Hennig (2016) is retrieved. This is Proposition 3.
Proposition 3
Let \(\{{\mathcal {X}}_{n,j}\}_{j=1}^J\) and \(\{w_{n,j}\}_{j=1}^J\) be the nodes and weights, corresponding to a Bayesian quadrature rule with respect to \({\mathcal {N}}(\mu _n^P,\varSigma _n^P)\). Furthermore, assume \(R = 0\) and that the cross-covariance between \({\dot{C}}X_n\) and \(f(C X_n,t_n)\) is approximated as zero,
$$\begin{aligned} {\mathbb {C}}\big [{\dot{C}}X_n,f(CX_n,t_n)\mid z_{1:n-1}\big ] \approx 0. \end{aligned}$$
Then the probabilistic solver proposed in Kersting and Hennig (2016) is a Bayesian quadrature approximation to Eq. (12).
A proof of Proposition 3 is given in "Appendix B".
While a cross-covariance assumption of Proposition 3 reproduces the method of Kersting and Hennig (2016), Bayesian quadrature approximations have previously been used for Gaussian filtering in signal processing applications by Prüher and Šimandl (2015), which in this context gives a new solver.
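A corresponding sketch (again ours) of the quadrature-based update approximates the moments in Eq. (12) with \(2D\) symmetric, equally weighted nodes, i.e. a third-degree rule of the UKF/cubature type; the node construction and the helper name are our own choices, and R is assumed to be a \(d \times d\) matrix.

import numpy as np

def cubature_update(mu_p, Sigma_p, C, Cdot, f, t, R):
    D = mu_p.size
    L = np.linalg.cholesky(Sigma_p)
    nodes = mu_p + np.sqrt(D) * np.hstack([L, -L]).T   # 2D symmetric nodes
    w = np.full(2 * D, 1.0 / (2 * D))                  # equal weights; cf. Eq. (17)
    zs = np.array([Cdot @ x - f(C @ x, t) for x in nodes])
    z_hat = w @ zs                                     # approximates Eq. (12c)
    dz = zs - z_hat
    S = dz.T @ (w[:, None] * dz) + R                   # approximates Eq. (12a)
    Cxz = (nodes - mu_p).T @ (w[:, None] * dz)         # cross-covariance in Eq. (12b)
    K = Cxz @ np.linalg.inv(S)
    mu_f = mu_p - K @ z_hat                            # Eq. (12d) with z_n = 0
    Sigma_f = Sigma_p - K @ S @ K.T                    # Eq. (12e)
    return mu_f, Sigma_f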
Affine vector fields
It is instructive to examine the particular case when the vector field in Eq. (1) is affine. That is,
$$\begin{aligned} f(y(t),t) = \varLambda (t) y(t) + \zeta (t). \end{aligned}$$
In such a case, Eq. (7) becomes a linear Gaussian system, which is solved exactly by a Kalman filter. The equations for implementing this Kalman filter are precisely Eq. (11) and Eq. (12), although the latter set of equations can be simplified. Define \(H_n = {\dot{C}} - \varLambda (t_n) C\), then the update equations become
$$\begin{aligned} S_n&= H_n \varSigma _n^P H_n^{\mathsf {T}}+ R, \end{aligned}$$
$$\begin{aligned} K_n&= \varSigma _n^P H_n^{\mathsf {T}}S_n^{-1}, \end{aligned}$$
$$\begin{aligned} \mu _n^F&= \mu _n^P + K_n \big (\zeta (t_n) - H_n \mu _n^P \big )\end{aligned}$$
$$\begin{aligned} \varSigma _n^F&= \varSigma _n^P - K_n S_n K_n^{\mathsf {T}}. \end{aligned}$$
Lemma 1
Consider the inference problem in Eq. (7) with an affine vector field as given in Eq. (19). Then the EKF reduces to the exact Kalman filter, which uses the update in Eq. (20). Furthermore, the same holds for Gaussian filters using a quadrature approximation to Eq. (12), provided that it integrates polynomials correctly up to second order with respect to the distribution \({\mathcal {N}}(\mu _n^P,\varSigma _n^P)\).
Since the Kalman filter, the EKF, and the quadrature approach all use Eq. (11) for prediction, it is sufficient to make sure that the EKF and the quadrature approximation compute Eq. (12) exactly, just as the Kalman filter. Now the EKF approximates the vector field by an affine function for which it computes the moments in Eq. (12) exactly. Since this affine approximation is formed by a truncated Taylor series, it is exact for affine functions and the statement pertaining to the EKF holds. Furthermore, the Gaussian integrals in Eq. (12) are polynomials of degree at most two for affine vector fields and are therefore computed exactly by the quadrature rule by assumption. \(\square \)
Particle filtering
The Gaussian filtering methods from Sect. 2.3 may often suffice. However, there are cases where more sophisticated inference methods may be preferable, for instance, when the posterior becomes multi-modal due to chaotic behaviour or 'numerical bifurcations'. That is, when it is numerically unknown whether the true solution is above or below a certain threshold that determines the limit behaviour of its trajectory. While sampling-based probabilistic solvers such as those of Chkrebtii et al. (2016), Conrad et al. (2017), Teymur et al. (2016), Lie et al. (2019), Abdulle and Garegnani (2018) and Teymur et al. (2018) can pick up such phenomena, the Gaussian filtering-based ODE solvers discussed in Sect. 2.3 cannot. However, this limitation may be overcome by approximating the filtering distribution of the inference problem in Eq. (7) with particle filters that are based on a sequential formulation of importance sampling (Doucet et al. 2001).
A particle filter operates on a set of particles, \(\{X_{n,j}\}_{j=1}^J\), a set of positive weights \(\{w_{n,j}\}_{j=1}^J\) associated to the particles that sum to one and an importance density, \(g(x_{n+1}\mid x_n, z_n)\). The particle filter then cycles through three steps (1) propagation, (2) re-weighting, and (3) re-sampling (Särkkä 2013, Chapter 7.4).
The propagation step involves sampling particles at time \(n+1\) from the importance density:
$$\begin{aligned} X_{n+1,j} \sim g(x_{n+1}\mid X_{n,j}, z_n). \end{aligned}$$
The re-weighting of the particles is done by a likelihood ratio with the product of the measurement density and the transition density of Eq. (7), and the importance density. That is, the updated weights are given by
$$\begin{aligned} \rho (x_{n+1},x_n)&= \frac{p(z_{n+1}\mid x_{n+1}) p(x_{n+1}\mid x_{n}) }{g(x_{n+1}\mid x_{n},z_{n+1})}, \end{aligned}$$
$$\begin{aligned} w_{n+1,j}&\propto \rho \big (X_{n+1,j},X_{n,j}\big ) w_{n,j}, \end{aligned}$$
where the proportionality sign indicates that the weights need to be normalised to sum to one after they have been updated according to Eq. (22). The weight update is then followed by an optional re-sampling step (Särkkä 2013, Chapter 7.4). While not re-sampling in principle yields a valid algorithm, it becomes necessary in order to avoid the degeneracy problem for long time series (Doucet et al. 2001, Chapter 1.3). The efficiency of particle filters depends on the choice of importance density. In terms of variance, the locally optimal importance density is given by (Doucet et al. 2001)
$$\begin{aligned} g(x_n\mid x_{n-1},z_{n}) \propto p(z_n\mid x_n)p(x_n\mid x_{n-1}). \end{aligned}$$
While Eq. (23) is almost as intractable as the full filtering distribution, the Gaussian filtering methods from Sect. 2.3 can be used to make a good approximation. For instance, the approximation to the optimal importance density using Eq. (14) is given by
$$\begin{aligned} S_n&= {\dot{C}} Q(h){\dot{C}}^{\mathsf {T}}+ R, \end{aligned}$$
$$\begin{aligned} K_n&= Q(h) {\dot{C}}^{\mathsf {T}}S_n^{-1}, \end{aligned}$$
$$\begin{aligned} {\hat{z}}_n&= {\dot{C}} A(h)x_{n-1} - f\big (CA(h)x_{n-1},t_n\big ), \end{aligned}$$
$$\begin{aligned} \mu _n&= A(h)x_{n-1} + K_n(z_n - {\hat{z}}_n), \end{aligned}$$
$$\begin{aligned} \varSigma _n&= Q(h) - K_n S_n K_n^{\mathsf {T}}, \end{aligned}$$
$$\begin{aligned}&g(x_n\mid x_{n-1},z_n) = {\mathcal {N}}(x_n;\mu _n,\varSigma _n). \end{aligned}$$
An importance density can be similarly constructed from Eq. (16), resulting in:
$$\begin{aligned} {\tilde{C}}_n&= {\dot{C}} - J_f\big (CA(h)x_{n-1} ,t_n\big )C, \end{aligned}$$
$$\begin{aligned} S_n&= {\tilde{C}}_n Q(h) {\tilde{C}}_n^{\mathsf {T}}+ R, \end{aligned}$$
$$\begin{aligned} K_n&= Q(h) {\tilde{C}}_n^{\mathsf {T}}S_n^{-1}, \end{aligned}$$
$$\begin{aligned} {\hat{z}}_n&= {\dot{C}} A(h)x_{n-1} - f\big (CA(h)x_{n-1} ,t_n\big ), \end{aligned}$$
$$\begin{aligned} \mu _n&= A(h)x_{n-1} + K_n(z_n - {\hat{z}}_n), \end{aligned}$$
$$\begin{aligned} \varSigma _n&= Q(h) - K_n S_n K_n^{\mathsf {T}}, \end{aligned}$$
$$\begin{aligned}&g(x_n\mid x_{n-1},z_n) = {\mathcal {N}}(x_n;\mu _n,\varSigma _n). \end{aligned}$$
(25g)
Note that we have assumed \(\xi (h) = 0\) in Eqs. (24) and (25), which can be extended to \(\xi (h) \ne 0\) by replacing \(A(h)x_{n-1}\) with \(A(h)x_{n-1} + \xi (h)\). We refer the reader to Doucet et al. (2000, Section II.D.2) for a more thorough discussion on the use of local linearisation methods to construct importance densities.
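For completeness, a schematic single step (ours; the names and structure are our own) of the particle filter with the zeroth-order proposal of Eq. (24), \(\xi (h) = 0\), and a strictly positive measurement covariance R, so that the weight ratio of Eq. (22a) stays well defined; the resampling step is omitted.

import numpy as np
from scipy.stats import multivariate_normal as mvn

def pf_ode_step(particles, weights, A, Q, C, Cdot, f, t_next, R, rng):
    d = Cdot.shape[0]
    S = Cdot @ Q @ Cdot.T + R                          # Eq. (24a)
    K = Q @ Cdot.T @ np.linalg.inv(S)                  # Eq. (24b)
    Sigma = Q - K @ S @ K.T                            # Eq. (24e)
    new_particles = np.empty_like(particles)
    log_w = np.log(weights)
    for j, x in enumerate(particles):
        x_pred = A @ x
        z_hat = Cdot @ x_pred - f(C @ x_pred, t_next)  # Eq. (24c)
        mu = x_pred - K @ z_hat                        # Eq. (24d) with z_n = 0
        xj = rng.multivariate_normal(mu, Sigma)        # propagation, Eq. (21)
        new_particles[j] = xj
        # Weight update, Eq. (22): p(z | x') p(x' | x) / g(x' | x, z)
        z_mean = Cdot @ xj - f(C @ xj, t_next)
        log_w[j] += (mvn.logpdf(np.zeros(d), mean=z_mean, cov=R)
                     + mvn.logpdf(xj, mean=x_pred, cov=Q)
                     - mvn.logpdf(xj, mean=mu, cov=Sigma, allow_singular=True))
    w = np.exp(log_w - np.max(log_w))
    return new_particles, w / np.sum(w)

A weighted approximation of \(y(t_n)\) is then read off from the first blocks \(CX_n\) of the particles.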
We conclude this section with a brief discussion on the convergence of particle filters. The following theorem is given by Crisan and Doucet (2002).
Theorem 1
Let \(\rho (x_{n+1},x_n)\) in Eq. (22a) be bounded from above and denote the true filtering measure associated with Eq. (7) at time n by \(p_n^R\), and let \({\hat{p}}_n^{R,J}\) be its particle approximation using J particles with importance density \(g(x_{n+1}\mid x_{n},z_{n+1})\). Then, for all \(n \in {\mathbb {N}}_0\), there exists a constant \(c_n\) independent of J such that for any bounded Borel function \(\phi :{\mathbb {R}}^{d(q+1)} \rightarrow {\mathbb {R}}\) the following bound holds
$$\begin{aligned} {\mathbb {E}}_{\mathrm {MC}}[(\langle {\hat{p}}_n^{R,J},\phi \rangle - \langle p_n^R,\phi \rangle )^2]^{1/2} \le c_n J^{-1/2} \Vert \phi \Vert , \end{aligned}$$
where \(\langle p, \phi \rangle \) denotes \(\phi \) integrated with respect to \(p\), \({\mathbb {E}}_{\mathrm {MC}}\) denotes the expectation over realisations of the particle method, and \(\Vert \cdot \Vert \) is the supremum norm.
Theorem 1 shows that we can decrease the distance (in the weak sense) between \({\hat{p}}_n^{R,J}\) and \(p_n^R\) by increasing J. However, the object we want to approximate is \(p_n^0\) (the exact filtering measure associated with Eq. (7) for \(R=0\)) but setting \(R = 0\) makes the likelihood ratio in Eq. (22a) ill-defined for the proposal distributions in Eqs. (24) and (25). This is because, when \(R=0\), then \(p(z_{n+1}\mid x_{n+1})p(x_{n+1}\mid x_n)\) has its support on the surface \({\dot{C}}x_{n+1} = f(Cx_{n+1},t_{n+1})\) while Eqs. (24) or (25) imply that the variance of \({\dot{C}}X_{n+1}\) or \({\tilde{C}}_{n+1}X_{n+1}\) will be zero with respect to \(g(x_{n+1}\mid x_n, z_{n+1})\), respectively. That is, \(g(x_{n+1}\mid x_n, z_{n+1})\) is supported on a hyperplane. It follows that the null-sets of \(g(x_{n+1}\mid x_n, z_{n+1})\) are not necessarily null-sets of \(p(z_{n+1}\mid x_{n+1})p(x_{n+1}\mid x_n)\) and the likelihood ratio in Eq. (22a) can therefore be undefined. However, a straightforward application of the triangle inequality together with Theorem 1 gives
$$\begin{aligned}&{\mathbb {E}}_{\mathrm {MC}}[(\langle {\hat{p}}_n^{R,J},\phi \rangle - \langle p_n^0,\phi \rangle )^2]^{1/2} \nonumber \\&\quad \le {\mathbb {E}}_{\mathrm {MC}}[(\langle {\hat{p}}_n^{R,J},\phi \rangle - \langle p_n^R,\phi \rangle )^2]^{1/2} \nonumber \\&\qquad + {\mathbb {E}}_{\mathrm {MC}}[(\langle p_n^{R},\phi \rangle - \langle p_n^0,\phi \rangle )^2]^{1/2} \nonumber \\&\quad = {\mathbb {E}}_{\mathrm {MC}}[(\langle {\hat{p}}_n^{R,J},\phi \rangle - \langle p_n^R,\phi \rangle )^2]^{1/2} \nonumber \\&\qquad + \big |\langle p_n^{R},\phi \rangle - \langle p_n^0,\phi \rangle \big |\nonumber \\&\quad \le c_n J^{-1/2} \mathinner { \!\left||\phi \right|| } + \big |\langle p_n^R,\phi \rangle -\langle p_n^0,\phi \rangle \big |. \end{aligned}$$
The last term vanishes as \(R \rightarrow 0\). That is, the error can be controlled by increasing the number of particles J and decreasing R. Though a word of caution is appropriate, as particle filters can become ill-behaved in practice if the likelihoods are too narrow (too small R). However, this also depends on the quality of the proposal distribution.
Lastly, while Theorem 1 is only valid if \(\rho (x_{n+1},x_n)\) is bounded, this can be ensured by either inflating the covariance of the proposal distribution or replacing the Gaussian proposal with a Student's t proposal (Cappé et al. 2005, Chapter 9).
A stability result for Gaussian filters
ODE solvers are often characterised by the properties of their solution to the linear test equation
$$\begin{aligned} {\dot{y}}(t) = \lambda y(t), \ y(0) = 1, \end{aligned}$$
where \(\lambda \) is some complex number. A numerical solver is said to be A-stable if the approximate solution tends to zero for any fixed step size h whenever the real part of \(\lambda \) resides in the left half-plane (Dahlquist 1963). Recall that if \(y_0 \in {\mathbb {R}}^d\) and \(\varLambda \in {\mathbb {R}}^{d\times d}\) then the ODE \({\dot{y}}(t) = \varLambda y(t), \ y(0) = y_0\) is said to be asymptotically stable if \(\lim _{t \rightarrow \infty } y(t) = 0\), which is precisely when the real part of eigenvalues of \(\varLambda \) are in the left half-plane. That is, A-stability is the notion that a numerical solver preserves asymptotic stability of linear time-invariant ODEs.
While the present solvers are not designed to solve complex valued ODEs, a real system equivalent to Eq. (28) is given by
$$\begin{aligned} {\dot{y}}(t) = \varLambda _{\text {test}} y(t), \quad y^{\mathsf {T}}(0) = [1\ 0], \ \end{aligned}$$
where \(\lambda = \lambda _1 + i \lambda _2\) and
$$\begin{aligned} \varLambda _{\text {test}} = \begin{bmatrix} \lambda _1&\quad - \,\lambda _2 \\ \lambda _2&\quad \lambda _1 \end{bmatrix}. \end{aligned}$$
However, to leverage classical stability results from the theory of Kalman filtering we investigate a slightly different test equation, namely
$$\begin{aligned} {\dot{y}}(t) = \varLambda y(t),\ y(0) = y_0, \end{aligned}$$
where \(\varLambda \in {\mathbb {R}}^{d\times d}\) is of full rank. In this case, Eqs. (11) and (20) give the following recursion for \(\mu _n^P\)
$$\begin{aligned} \mu _{n+1}^P&= (A(h) - A(h) K_n H) \mu _n^P, \end{aligned}$$
$$\begin{aligned} \mu _n^F&= (\mathrm {I} - K_n H)\mu _n^P, \end{aligned}$$
where we recall that \(H = {\dot{C}}- C\varLambda \) and \(z_n = 0\). If there exists a limit gain \(\lim _{n\rightarrow \infty } K_n = K_\infty \) then asymptotic stability of the filter holds provided that the eigenvalues of \((A(h) - A(h) K_\infty H)\) are strictly within the unit circle (Anderson and Moore 1979, Appendix C, p. 341). That is, \(\lim _{n\rightarrow \infty } \mu _n^P = 0\) and as a direct consequence \(\lim _{n\rightarrow \infty } \mu _n^F = 0\).
We shall see that the Kalman filter using an IWP(q) prior is asymptotically stable. For the IWP(q) process on \({\mathbb {R}}^d\) we have \(u = 0\), \(L = \mathrm {e}_{q+1}\otimes \varGamma ^{1/2}\), and \(F = (\sum _{i=1}^q \mathrm {e}_i\mathrm {e}_{i+1}^{\mathsf {T}})\otimes \mathrm {I}\), where \(\mathrm {e}_i \in {\mathbb {R}}^{q+1}\) is the ith canonical basis vector, \(\varGamma ^{1/2}\) is the symmetric square root of some positive semi-definite matrix \(\varGamma \in {\mathbb {R}}^{d\times d}\), \(\mathrm {I} \in {\mathbb {R}}^{d\times d}\) is the identity matrix, and \(\otimes \) is Kronecker's product. By using Eq. (8), the properties of Kronecker products and the definition of the matrix exponential, the equivalent discrete-time system is given by
$$\begin{aligned} A(h)&= A^{(1)}(h) \otimes \mathrm {I}, \end{aligned}$$
$$\begin{aligned} \xi (h)&= 0, \end{aligned}$$
$$\begin{aligned} Q(h)&= Q^{(1)}(h) \otimes \varGamma , \end{aligned}$$
where \(A^{(1)}(h) \in {\mathbb {R}}^{(q+1)\times (q+1)}\) and \(Q^{(1)}(h) \in {\mathbb {R}}^{(q+1)\times (q+1)}\) are given by (Kersting et al. 2018, Appendix A)Footnote 2
$$\begin{aligned} A_{ij}^{(1)}(h)&= {\mathbb {I}}_{i \le j} \frac{h^{j-i}}{(j-i)!}, \end{aligned}$$
$$\begin{aligned} Q_{ij}^{(1)}(h)&= \frac{ h^{2q+3-i-j}}{(2q+3-i-j)(q+1-i)!(q+1-j)!}, \end{aligned}$$
and \({\mathbb {I}}_{i \le j}\) is an indicator function. Before proceeding, we need to introduce the notions of stabilisability and detectability from Kalman filtering theory. These notions can be found in Anderson and Moore (1979, Appendix C).
(Complete stabilisability) The pair [A, G] is completely stabilisable if \(w^{\mathsf {T}}G = 0\) and \(w^{\mathsf {T}}A = \eta w^{\mathsf {T}}\) for some constant \(\eta \) implies \(|\eta | < 1\) or \(w = 0\).
(Complete detectability)Footnote 3 [A, H] is completely detectable if \([A^{\mathsf {T}},H^{\mathsf {T}}]\) is completely stabilisable.
Before we state the stability result of this section, the following two lemmas are useful.
Lemma 2
Consider the discretised IWP(q) prior on \({\mathbb {R}}^d\) as given by Eq. (33). Let \(h > 0\) and \(\varGamma \) be positive definite. Then, the \(d\times d\) blocks of Q(h), denoted by \(Q_{i,j}(h),\ i,j = 1,2,\ldots ,q+1\) are of full rank.
From Eq. (33c), we have \(Q_{i,j}(h) = Q_{i,j}^{(1)}(h) \varGamma \). From Eq. (34b) and \(h> 0\), we have \(Q_{i,j}^{(1)}(h) > 0\), and since \(\varGamma \) is positive definite it is of full rank. It then follows that \(Q_{i,j}(h)\) is of full rank as well. \(\square \)
Let A(h) be the transition matrix of an IWP(q) prior as given by Eq. (33a) and let \(h>0\). Then A(h) has a single eigenvalue, given by \(\eta = 1\). Furthermore, the right-eigenspace is given by
$$\begin{aligned} {\text {span}}[\mathrm {e}_1,\mathrm {e}_2,\ldots ,\mathrm {e}_d], \end{aligned}$$
where \(\mathrm {e}_i \in {\mathbb {R}}^{(q+1)d}\) are canonical basis vectors, and the left-eigenspace is given by
$$\begin{aligned} {\text {span}}[\mathrm {e}_{qd+1},\mathrm {e}_{qd+2},\ldots ,\mathrm {e}_{(q+1)d}]. \end{aligned}$$
Firstly, from Eqs. (33a) and (34a) it follows that A(h) is block upper-triangular with identity matrices on the block diagonal, hence the characteristic equation is given by
$$\begin{aligned} \det (A(h)-\eta \mathrm {I}) = (1-\eta )^{(q+1)d} = 0, \end{aligned}$$
from which we conclude that the only eigenvalue is \(\eta = 1\). To find the right-eigenspace, let \(w^{\mathsf {T}}= [w_1^{\mathsf {T}},w_2^{\mathsf {T}},\ldots ,w_{q+1}^{\mathsf {T}}]\), \(w_i \in {\mathbb {R}}^d,\ i=1,2,\ldots ,q+1\), and solve \(A(h)w = w\), which by using Eqs. (33a) and (34a) can be written as
$$\begin{aligned} (A(h)w)_l = \sum _{r=0}^{q+1-l} \frac{h^r}{r!} w_{r+l}, \quad l = 1,2,\ldots ,q+1, \end{aligned}$$
where \((\cdot )_l\) is the lth sub-vector of dimension d. Starting with \(l = q+1\), we trivially have \(w_{q+1} = w_{q+1}\). For \(l = q\) we have \(w_q + w_{q+1}h = w_q\) but \(h > 0\), hence \(w_{q+1} = 0\). Similarly for \(l = q-1\) we have \(w_{q-1} = w_{q-1} + w_qh + w_{q+1} h^2/2 = w_{q-1} + w_q h + 0 \cdot h^2/2\). Again since \(h > 0\) we have \(w_q = 0\). By repeating this argument we have \(w_1 = w_1\) and \(w_i = 0,\ i = 2,3,\ldots ,q+1\). Therefore all eigenvectors w are of the form \(w^{\mathsf {T}}= [w_1^{\mathsf {T}},0^{\mathsf {T}},\ldots ,0^{\mathsf {T}}] \in {\text {span}}[\mathrm {e}_1,\mathrm {e}_2,\ldots ,\mathrm {e}_d]\). Similarly, for the left eigenspace we have
$$\begin{aligned} (w^{\mathsf {T}}A(h))_l = \sum _{r=0}^{l-1} \frac{h^r}{r!} w_{l-r}^{\mathsf {T}}, \quad l = 1,2,\ldots ,q+1. \end{aligned}$$
Starting with \(l = 1\) we have trivially that \(w_1^{\mathsf {T}}= w_1^{\mathsf {T}}\). For \(l = 2\) we have \(w_2^{\mathsf {T}}+ w_1^{\mathsf {T}}h = w_2^{\mathsf {T}}\) but \(h>0\), hence \(w_1=0\). For \(l = 3\) we have \(w_3^{\mathsf {T}}= w_3^{\mathsf {T}}+ w_2^{\mathsf {T}}h + w_1^{\mathsf {T}}h^2/2 = w_3^{\mathsf {T}}+ w_2^{\mathsf {T}}h + 0^{\mathsf {T}}\cdot h^2/2\) but \(h > 0\) hence \(w_2 =0\). By repeating this argument, we have \(w_i= 0, i=1,\ldots ,q\) and \(w_{q+1} = w_{q+1}\). Therefore, all left eigenvectors are of the form \(w^{\mathsf {T}}= [0^{\mathsf {T}},\ldots ,0^{\mathsf {T}},w_{q+1}^{\mathsf {T}}] \in {\text {span}}[\mathrm {e}_{qd+1},\mathrm {e}_{qd+2},\ldots ,\mathrm {e}_{(q+1)d}]\). \(\square \)
We are now ready to state the main result of this section. Namely, that the Kalman filter that produces exact inference in Eq. (7) for linear vector fields is asymptotically stable if the linear vector field is of full rank.
Let \(\varLambda \in {\mathbb {R}}^{d\times d}\) be a matrix with full rank and consider the linear ODE
$$\begin{aligned} {\dot{y}}(t) = \varLambda y(t). \end{aligned}$$
Consider estimating the solution of Eq. (38) using an IWP(q) prior with the same conditions on \(\varGamma \) as in Lemma 2. Then the Kalman filter estimate of the solution to Eq. (38) is asymptotically stable.
From Eq. (7), we have that the Kalman filter operates on the following system
$$\begin{aligned} X_{n+1}&= A(h) X_n + Q^{1/2}(h)W_{n+1}, \end{aligned}$$
$$\begin{aligned} Z_n&= H X_n, \end{aligned}$$
where \(H = [-\varLambda , \mathrm {I},0,\ldots ,0]\) and \(W_n\) are i.i.d. standard Gaussian vectors. It is sufficient to show that [A(h), H] is completely detectable and \([A(h),Q^{1/2}(h)]\) is completely stabilisable (Anderson and Moore 1979, Chapter 4, p. 77). We start by showing complete detectability. If we let \(w^{\mathsf {T}}= [w_1^{\mathsf {T}},\ldots ,w_{q+1}^{\mathsf {T}}]\), \(w_i \in {\mathbb {R}}^d,\ i=1,2,\ldots ,q+1\), then by Lemma 3 we have that \(w^{\mathsf {T}}A^{\mathsf {T}}(h) = \eta w^{\mathsf {T}}\) for some \(\eta \) implies that either \(w = 0\) or \(w^{\mathsf {T}}= [w_1^{\mathsf {T}},0^{\mathsf {T}},\ldots ,0^{\mathsf {T}}]\) for some \(w_1 \in {\mathbb {R}}^d\) and \(\eta = 1\). Furthermore, \(w^{\mathsf {T}}H^{\mathsf {T}}= -w_1^{\mathsf {T}}\varLambda ^{\mathsf {T}}+ w_2^{\mathsf {T}}= 0\) implies that \(w_2 = \varLambda w_1\). However, by the previous argument, we have \(w_2 = 0\); therefore, \(0 = \varLambda w_1\), but \(\varLambda \) is full rank by assumption, so \(w_1 = 0\). Therefore, \([A^{\mathsf {T}}(h),H^{\mathsf {T}}]\) is completely stabilisable, and hence [A(h), H] is completely detectable. As for complete stabilisability, again by Lemma 3, we have \(w^{\mathsf {T}}A(h) = \eta w^{\mathsf {T}}\) for some \(\eta \), which implies either \(w = 0\) or \(w^{\mathsf {T}}= [0^{\mathsf {T}},\ldots ,0^{\mathsf {T}},w_{q+1}^{\mathsf {T}}]\) and \(\eta = 1\). Furthermore, since the nullspace of \(Q^{1/2}(h)\) is the same as the nullspace of Q(h), we have that \(w^{\mathsf {T}}Q^{1/2}(h) = 0\) is equivalent to \(w^{\mathsf {T}}Q(h) = 0\), which is given by
$$\begin{aligned} w^{\mathsf {T}}Q(h) = \begin{bmatrix} w_{q+1}^{\mathsf {T}}Q_{q+1,1}(h)&\ldots&w_{q+1}^{\mathsf {T}}Q_{q+1,q+1}(h) \end{bmatrix} = 0, \end{aligned}$$
but by Lemma 2 the blocks \(Q_{i,j}(h)\) have full rank so \(w_{q+1} = 0\) and thus \(w = 0\). To conclude, we have that \([A(h),Q^{1/2}(h)]\) is completely stabilisable and [A(h), H] is completely detectable and therefore the Kalman filter is asymptotically stable.\(\square \)
Corollary 1
In the same setting as Theorem 2, the EKF and UKF are asymptotically stable.
Since the vector field is linear, and therefore affine, Lemma 1 implies that the EKF and UKF reduce to the exact Kalman filter, which is asymptotically stable by Theorem 2.\(\square \)
It is worthwhile to note that \(\varLambda _{\text {test}}\) is of full rank for all \([\lambda _1\ \lambda _2]^{\mathsf {T}}\in {\mathbb {R}}^2 \setminus \{ 0 \}\), and consequently Theorem 2 and Corollary 1 guarantee A-stability for the EKF and UKF in the sense of Dahlquist (1963) (see Footnote 4). Lastly, a peculiar fact about Theorem 2 is that it makes no reference to the eigenvalues of \(\varLambda \) (i.e. the stability properties of the ODE). That is, the Kalman filter will be asymptotically stable even if the underlying ODE is not, provided that \(\varLambda \) is of full rank. This may seem awkward, but it is rarely the case that the ODE that we want to integrate is unstable, and even in such a case most solvers will produce an error that grows without bound as well. However, all of the aforementioned properties are at least partly consequences of using the IWP(q) as a prior, and they may thus be altered by changing the prior.
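The stability statement can also be checked numerically. The sketch below (ours; NumPy only, with the matrices hard-coded for an IWP(1) prior on \({\mathbb {R}}^2\) and an arbitrarily chosen step size and frequency) iterates the filter covariance recursion for the test equation and inspects the spectral radius of the closed-loop matrix \(A(h) - A(h)K_\infty H\), which Theorem 2 predicts to be strictly less than one.

```python
import numpy as np

lam1, lam2, h = 0.0, np.pi, 0.1
Lam = np.array([[lam1, -lam2], [lam2, lam1]])   # full rank since lam2 != 0
I2 = np.eye(2)

# IWP(1) prior on R^2, discretised as in Eqs. (33)-(34) with Gamma = I.
A = np.kron(np.array([[1.0, h], [0.0, 1.0]]), I2)
Q = np.kron(np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]]), I2)

# Measurement model for the linear ODE: H = Cdot - Lam C = [-Lam, I], data z_n = 0, R = 0.
H = np.hstack([-Lam, I2])

# Iterate the prediction/update covariance recursion until the gain settles.
Sig_F = np.eye(4)
K = None
for _ in range(500):
    Sig_P = A @ Sig_F @ A.T + Q
    S = H @ Sig_P @ H.T
    K = Sig_P @ H.T @ np.linalg.inv(S)
    Sig_F = Sig_P - K @ S @ K.T
    Sig_F = 0.5 * (Sig_F + Sig_F.T)   # keep the covariance numerically symmetric

closed_loop = A - A @ K @ H
print(max(abs(np.linalg.eigvals(closed_loop))))   # expected to be strictly below 1
```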
Uncertainty calibration
In practice, the model parameters (F, u, L) may depend on hyperparameters that need to be estimated for the probabilistic solver to report appropriate uncertainty in the estimated solution to Eq. (1). The diffusion matrix L is of particular importance, as it determines the gain of the Wiener process entering the system in Eq. (3) and thus how 'diffuse' the prior is. Herein we shall only concern ourselves with estimating L, though one might anticipate future interest in estimating F and u as well. However, let us start with a few words on the monitoring of errors in numerical solvers in general.
Monitoring of errors in numerical solvers
An important aspect of numerical analysis is to monitor the error of a method. While the goal of probabilistic solvers is to do so by calibration of a probabilistic model, the approach of classical numerical analysis is to examine the local and global errors. The global error can be bounded, but such bounds are typically impractical for monitoring the error (Hairer et al. 1987, Chapter II.3). A more practical approach is to monitor (and control) the accumulation of local errors. This can be done by using two step sizes together with Richardson extrapolation (Hairer et al. 1987, Theorem 4.1), though perhaps more commonly it is done via embedded Runge–Kutta methods (Hairer et al. 1987, Chapter II.4) or the Milne device (Byrne and Hindmarsh 1975).
In the context of filters, the relevant object in this regard is the scaled residual \(S_n^{-1/2}(z_n - {\hat{z}}_n)\). Due to its role in the prediction-error decomposition, which is defined below, it directly monitors the calibration of the predictive distribution. Schober et al. (2019) showed how to use this quantity to effectively control step sizes in practice. It was also recently shown (Kersting et al. 2018, Section 7) that, in the case of \(q=1\), fixed \(\sigma ^2\) (the amplitude of the Wiener process), and an integrated Wiener process prior, the posterior standard deviation computed by the solver of Schober et al. (2019) contracts at the same rate as the worst-case error as the step size goes to zero, thereby preventing both under- and overconfidence.
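As an illustration, one way to compute the scaled residual \(S_n^{-1/2}(z_n - {\hat{z}}_n)\) is by Cholesky whitening. The sketch below is ours (assuming NumPy and a positive-definite \(S_n\)), not part of any published implementation.

```python
import numpy as np

def scaled_residual(z, z_hat, S):
    """Whitened residual S^{-1/2}(z - z_hat), using the Cholesky factor of S as
    the square root. Its magnitude monitors the calibration of the predictive
    distribution N(z_hat, S)."""
    L = np.linalg.cholesky(S)
    return np.linalg.solve(L, z - z_hat)
```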
In the following, we discuss effective strategies for calibrating L when it is given by \(L = \sigma \breve{L}\) for fixed \(\breve{L}\), thus providing a probabilistic quantification of the error in the proposed solvers.
Uncertainty calibration for affine vector fields
As noted in Sect. 2.6, the Kalman filter produces the exact solution to the inference problem in Eq. (7) when the vector field is affine. Furthermore, the marginal likelihood \(p(z_{1:N})\) can be computed during the execution of the Kalman filter by the prediction error decomposition (Schweppe 1965), which is given by:
$$\begin{aligned} p(z_{1:N})&= p(z_1)\prod _{n=2}^N p(z_n\mid z_{1:n-1}) \nonumber \\&= \prod _{n=1}^N {\mathcal {N}}(z_n; {\hat{z}}_n,S_n). \end{aligned}$$
While the marginal likelihood in Eq. (40) is certainly straightforward to compute without adding much computational cost, maximising it is a different story in general. In the particular case when the diffusion matrix L and the initial covariance \(\varSigma _0\) are given by re-scaling fixed matrices \(L = \sigma \breve{L}\) and \(\varSigma _0 = \sigma ^2\breve{\varSigma }_0\) for some scalar \(\sigma > 0\), then uncertainty calibration can be done by a simple post-processing step after running the Kalman filter, as is shown in Proposition 4.
Let \(f(y,t) = \varLambda (t)y + \zeta (t)\), \(\varSigma _0 = \sigma ^2 \breve{\varSigma }_0\), \(L = \sigma \breve{L}\), \(R = 0\) and denote the equivalent discrete-time process noise covariance for the prior model \((F,u,\breve{L})\) by \(\breve{Q}(h)\). Then the Kalman filter estimate to the solution of
$$\begin{aligned} {\dot{y}}(t) = f(y(t),t) \end{aligned}$$
that uses the parameters \((\mu _0^F,\varSigma _0,A(h),\xi (h),Q(h))\) is equal to the Kalman filter estimate that uses the parameters \((\mu _0^F,\breve{\varSigma }_0,A(h),\xi (h),\breve{Q}(h))\). More specifically, if we denote the filter mean and covariance at time n using the former parameters by \((\mu _n^F,\varSigma _n^F)\) and the corresponding filter mean and covariance using the latter parameters by \((\breve{\mu }_n^F,\breve{\varSigma }_n^F)\), then \((\mu _n^F,\varSigma _n^F) = (\breve{\mu }_n^F,\sigma ^2\breve{\varSigma }_n^F)\). Additionally, denote the predicted mean and covariance of the measurement \(Z_n\) by \(\breve{z}_n\) and \(\breve{S}_n\), respectively, when using the parameters \((\mu _0^F,\breve{\varSigma }_0,A(h),\xi (h),\breve{Q}(h))\). Then the maximum likelihood estimate of \(\sigma ^2\), denoted by \(\widehat{\sigma ^2_N}\), is given by
$$\begin{aligned} \widehat{\sigma ^2_N} = \frac{1}{Nd} \sum _{n=1}^N (z_n - \breve{z}_n)^{\mathsf {T}}\breve{S}_n^{-1} (z_n - \breve{z}_n). \end{aligned}$$
Proposition 4 is just an amalgamation of statements from Tronarp et al. (2019). Nevertheless, we provide an accessible proof in "Appendix C".
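In code, Proposition 4 amounts to running the filter once with \(\sigma = 1\) and rescaling afterwards. A minimal sketch (ours, assuming NumPy) of the estimator in Eq. (41):

```python
import numpy as np

def sigma2_mle(residuals, innovation_covs):
    """Maximum likelihood estimate of sigma^2 from Eq. (41). `residuals` holds the
    residuals z_n - z_breve_n and `innovation_covs` holds S_breve_n, both collected
    from a single Kalman filter run with the unscaled model (L_breve, Sigma_breve_0)."""
    N, d = len(residuals), residuals[0].shape[0]
    quad = sum(float(r @ np.linalg.solve(S, r))
               for r, S in zip(residuals, innovation_covs))
    return quad / (N * d)
```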
Uncertainty calibration for non-affine vector fields
For non-affine vector fields, the issue of parameter estimation becomes more complicated. The Bayesian filtering problem is not solved exactly and consequently any marginal likelihood will be approximate as well. Nonetheless, a common approach in the Gaussian filtering framework is to approximate the marginal likelihood in the same manner as the filtering solution is approximated (Särkkä 2013, Chapter 12.3.3), that is:
$$\begin{aligned} p(z_{1:N}) \approx \prod _{n=1}^N {\mathcal {N}}(z_n; {\hat{z}}_n,S_n), \end{aligned}$$
where \({\hat{z}}_n\) and \(S_n\) are the quantities in Eq. (12) approximated by some method (e.g. the EKF). Maximising Eq. (42) is a common approach in signal processing (Särkkä 2013) and is referred to as quasi maximum likelihood in the time-series literature (Lindström et al. 2015). Both Eqs. (14) and (16) can be thought of as Kalman updates for the case where the vector field is approximated by a piece-wise affine function, without modifying \(\varSigma _0\), Q(h), and R. For instance, the affine approximation of the vector field due to the EKF on the discretisation interval \([t_n,t_{n+1})\) is given by
$$\begin{aligned} {\hat{\zeta }}_n(t)&= f\big (C\mu _n^P,t_n\big ) - J_f\big (C\mu _n^P,t_n\big )C\mu _n^P, \end{aligned}$$
$$\begin{aligned} {\hat{\varLambda }}_n(t)&= J_f\big (C\mu _n^P,t_n\big ), \end{aligned}$$
$$\begin{aligned} {\hat{f}}_n(y,t)&= {\hat{\varLambda }}_n(t)y + {\hat{\zeta }}_n(t). \end{aligned}$$
While the vector field is approximated by a piece-wise affine function, the discrete-time filtering problem Eq. (7) is still simply an affine problem, without modifications of \(\varSigma _0\), Q(h), and R. Therefore, the results of Proposition 4 still apply and the \(\sigma ^2\) maximising the approximate marginal likelihood in Eq. (42) can be computed in the same manner as in Eq. (41).
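For concreteness, a single predict/update step of the EKF-based solver might look as follows. This is a sketch under our own naming conventions (assuming NumPy), not the authors' implementation; C and Cdot denote the selection matrices extracting \(X^{(1)}\) and \(X^{(2)}\) from the state, and the linearisation follows the affine approximation above.

```python
import numpy as np

def ekf_ode_step(mu_F, Sig_F, A, Q, C, Cdot, f, J_f, t, R=0.0):
    """One EKF step for the ODE filter: predict with the prior transition (xi(h) = 0
    for the IWP prior), then update against the pseudo-observation z_n = 0 of
    Cdot X_n - f(C X_n, t_n), linearised around the predicted mean."""
    # Prediction, cf. Eq. (11).
    mu_P = A @ mu_F
    Sig_P = A @ Sig_F @ A.T + Q

    # Linearisation of the measurement function around C mu_P.
    y_hat = C @ mu_P
    H = Cdot - J_f(y_hat, t) @ C
    z_hat = Cdot @ mu_P - f(y_hat, t)

    # Update with data z_n = 0.
    S = H @ Sig_P @ H.T + R * np.eye(H.shape[0])
    K = Sig_P @ H.T @ np.linalg.inv(S)
    mu_F_new = mu_P - K @ z_hat
    Sig_F_new = Sig_P - K @ S @ K.T
    return mu_F_new, Sig_F_new
```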
On the other hand, it is clear that the dependence on \(\sigma ^2\) in Eq. (12) is non-trivial in general, which is also true for the quadrature approaches of Sect. 2.5. Therefore, maximising Eq. (42) for the quadrature approaches is not as straightforward. However, by Taylor series expanding the vector field in Eq. (12), one can see that the numerical integration approaches are roughly equal to the Taylor series approaches provided that \(\breve{\varSigma }_n^P\) is small. Therefore, we opt for plugging the corresponding quantities from the quadrature approximations into Eq. (41) in order to achieve computationally cheap calibration of these approaches.
A local calibration method for \(\sigma ^2\) is given by Schober et al. (2019, Eq. (45)), which in fact corresponds to an h-dependent prior, with the diffusion matrix in Eq. (3), \(L = L(t)\), being piece-wise constant over integration steps. Moreover, Schober et al. (2019) had to neglect the dependence of \(\varSigma _n^P\) on the likelihood. Here we prefer the estimator given in Eq. (41), since it attempts to maximise the likelihood of the globally defined probability model in Eq. (7), and it succeeds for affine vector fields.
More advanced methods for calibrating the parameters of the prior can be developed by combining the Gaussian smoothing equations (Särkkä 2013, Chapter 10) with the expectation maximisation method (Kokkala et al. 2014) or variational Bayes (Taniguchi et al. 2017).
Uncertainty calibration of particle filters
If calibration of Gaussian filters was complicated by having a non-affine vector field, the situation for particle filters is even more challenging. There is, to the authors' knowledge, no simple estimator of the scale of the Wiener process (such as Proposition 4) even for the case of affine vector fields. However, the literature on parameter estimation using particle methods is vast, so we proceed to point the reader towards some alternatives. In the class of off-line methods, Schön et al. (2011) uses a particle smoother to implement an expectation maximisation algorithm, while Lindsten (2013) uses particle Markov chain Monte Carlo methods to implement a stochastic approximation expectation maximisation algorithm. One can also use the iterated filtering method of Ionides et al. (2011) to obtain a maximum likelihood estimator, or particle Markov chain Monte Carlo (Andrieu et al. 2010).
On the other hand, if online calibration is required, then the gradient-based recursive maximum likelihood estimator by Doucet and Tadić (2003) can be used, or the online version of iterated filtering by Lindström et al. (2012). Furthermore, Storvik (2002) provides an alternative for online calibration when sufficient statistics of the parameters are finite dimensional and can be computed recursively in n. An overview of parameter estimation using particle filters was also given by Kantas et al. (2009).
Experimental results
In this section, we evaluate the solvers presented in this paper in a number of scenarios. Before we proceed to the experiments, we define some summary metrics with which assessments of accuracy and uncertainty quantification can be made. The root-mean-square error (RMSE) is often used to assess the accuracy of filtering algorithms and is defined by
$$\begin{aligned} \mathrm {RMSE} = \sqrt{\frac{1}{N} \sum _{n=1}^N \mathinner { \!\left||y(nh) - C \mu ^F_n\right|| }^2 }. \end{aligned}$$
In fact, \(y(nh) - C \mu ^F_n\) is precisely the global error at time \(t_n\) (Hairer et al. 1987, Eq. (3.16)). As for assessing the uncertainty quantification, the \(\chi ^2\)-statistic is commonly used (Bar-Shalom et al. 2001). That is, in a linear Gaussian model the following quantities
$$\begin{aligned} \big (y(nh) {-} C\mu _n^F\big )^{\mathsf {T}}[C \varSigma ^F_n C^{\mathsf {T}}]^{-1} \big (y(nh) {-} C\mu _n^F\big ), \ n{=}1,\ldots ,N, \end{aligned}$$
are i.i.d. \(\chi ^2(d)\). For a trajectory summary, we define the average \(\chi ^2\)-statistic as
$$\begin{aligned} \bar{\chi ^2} = \frac{1}{N}\sum _{n=1}^N \big (y(nh) - C\mu _n^F\big )^{\mathsf {T}}[C \varSigma ^F_n C^{\mathsf {T}}]^{-1} \big (y(nh) - C\mu _n^F\big ). \end{aligned}$$
For an accurate and well-calibrated model, the RMSE is small and \(\bar{\chi ^2} \approx d\). In the succeeding discussion, we shall refer to a method producing \(\bar{\chi ^2} < d\) or \(\bar{\chi ^2} > d\) as underconfident or overconfident, respectively.
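The two summary metrics translate directly into code. The sketch below is ours (assuming NumPy) and computes the RMSE and the average \(\chi ^2\)-statistic from a filtered trajectory and a reference solution.

```python
import numpy as np

def summary_metrics(y_ref, means_F, covs_F, C):
    """RMSE and average chi^2-statistic: y_ref[n] is the reference solution at
    t_n = nh, means_F[n]/covs_F[n] are the filter mean and covariance, and C
    extracts the solution block from the filter state."""
    errs = [y - C @ m for y, m in zip(y_ref, means_F)]
    rmse = np.sqrt(np.mean([float(e @ e) for e in errs]))
    chi2 = np.mean([float(e @ np.linalg.solve(C @ P @ C.T, e))
                    for e, P in zip(errs, covs_F)])
    return rmse, chi2   # well calibrated when chi2 is approximately d
```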
In this experiment, we consider a linear system given by
$$\begin{aligned} \varLambda&= \begin{bmatrix} \lambda _1&\quad -\,\lambda _2 \\ \lambda _2&\quad \lambda _1 \end{bmatrix}, \end{aligned}$$
$$\begin{aligned} {\dot{y}}(t)&= \varLambda y(t), \quad y(0) = \mathrm {e}_1. \end{aligned}$$
This makes for a good test model as the inference problem in Eq. (7) can be solved exactly, and consequently its adequacy can be assessed. We compare exact inference by the Kalman filter (KF) (see Sect. 2.6 and Footnote 5) with the approximation due to Schober et al. (2019) (SCH) (see Proposition 2) and the covariance approximation due to Kersting and Hennig (2016) (KER) (see Proposition 3). The integration interval is set to [0, 10], and all methods use an IWP(q) prior for \(q=1,2,\ldots ,6\). The initial mean is set to \({\mathbb {E}}[X^{(j)}(0)] = \varLambda ^{j-1}y(0)\) for \(j=1,\ldots ,q+1\), with the variance set to zero (exact initialisation). The uncertainty of the methods is calibrated by the maximum likelihood method (see Proposition 4), and the methods are examined for 10 step sizes uniformly placed on the interval \([10^{-3},10^{-1}]\).
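The exact initialisation used here is simple to write down explicitly; a small sketch (ours, assuming NumPy) stacks \(\varLambda ^{j-1}y(0)\) for \(j=1,\ldots ,q+1\):

```python
import numpy as np

def exact_initial_mean(Lam, y0, q):
    """Initial filter mean with E[X^(j)(0)] = Lam^(j-1) y(0) for j = 1, ..., q+1,
    used together with a zero initial covariance (exact initialisation)."""
    return np.concatenate([np.linalg.matrix_power(Lam, j) @ y0 for j in range(q + 1)])
```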
We examine the parameters \(\lambda _1 = 0\) and \(\lambda _2 = \pi \) (half a revolution per unit of time with no damping). The RMSE is plotted against step size in Fig. 1. It can be seen that SCH is slightly better than KF and KER for \(q = 1\) and small step sizes, and that KF becomes slightly better than SCH for large step sizes, while KER becomes significantly worse than both KF and SCH. For \(q > 1\), it can be seen that the RMSE is significantly lower for KF than for SCH/KER in general, with performance differing by between one and two orders of magnitude. In particular, the superior stability properties of KF (see Theorem 2) are demonstrated for \(q > 3\), where both SCH and KER produce massive errors for larger step sizes.
Furthermore, the average \(\chi ^2\)-statistic is shown in Fig. 2. All methods appear to be overconfident for \(q=1\), with SCH performing best, followed by KER. On the other hand, for \(1< q < 5\), SCH and KER remain overconfident for the most part, while KF is underconfident. Our experiments also show that, unsurprisingly, all methods perform better for smaller \(|\lambda _2|\) (the frequency of the oscillation). However, we omit visualising this here.
Finally, a demonstration of the error trajectory for the first component of y and the reported uncertainty of the solvers is shown in Fig. 3 for \(h = 10^{-2}\) and \(q = 2\). Here it can be seen that all methods produce similar error bars, though SCH and KER produce errors that oscillate far outside their reported uncertainties.
RMSE of KF, SCH, and KER on the undamped oscillator using IWP(q) priors for \(q=1,\ldots ,6\) plotted against step size
Average \(\chi ^2\)-statistic of KF, SCH, and KER on the undamped oscillator using IWP(q) priors for \(q=1,\ldots ,6\) plotted against step size. The expected \(\chi ^2\)-statistic is shown in black (E)
The errors (solid lines) and ± 2 standard deviation bands (dashed) for KF, SCH, and KER on the undamped oscillator with \(q=2\) and \(h = 10^{-2}\). A line at 0 is plotted in solid black
The logistic equation
In this experiment, the logistic equation is considered:
$$\begin{aligned} {\dot{y}}(t) = r y(t) \left( 1 - y(t)\right) , \quad y(0) = 1\cdot 10^{-1}, \end{aligned}$$
which has the solution:
$$\begin{aligned} y(t) = \frac{\exp (rt)}{1/y_0 -1 + \exp (rt)}. \end{aligned}$$
In the experiments, r is set to 3. We compare the zeroth-order solver (Proposition 2) (Schober et al. 2019) (SCH), the first-order solver in Eq. (16) (EKF), a numerical integration solver based on the covariance approximation in Proposition 3 (Kersting and Hennig 2016) (KER), and a numerical integration solver based on approximating Eq. (12) (UKF). Both numerical integration approaches use a third-degree fully symmetric rule (see McNamee and Stenger 1967). The integration interval is set to [0, 2.5], and all methods use an IWP(q) prior for \(q=1,2,\ldots ,4\). The initial means of \(X^{(1)}\), \(X^{(2)}\), and \(X^{(3)}\) are set to y(0), f(y(0)), and \(J_f(y(0))f(y(0))\), respectively (their correct values), with zero covariance. The remaining state components \(X^{(j)}, j>3\), are set to zero mean with unit variance. The uncertainty of the methods is calibrated by the quasi maximum likelihood method as explained in Sect. 4.3, and the methods are examined for 10 step sizes uniformly placed on the interval \([10^{-3},10^{-1}]\).
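The problem setup is small enough to state in a few lines. The sketch below (ours, assuming NumPy) collects the logistic vector field, the Jacobian used by the EKF linearisation, and the closed-form solution used as ground truth.

```python
import numpy as np

r, y0 = 3.0, 1e-1   # parameter and initial value as in the text

def f(y, t):
    """Logistic vector field of Eq. (45)."""
    return r * y * (1.0 - y)

def J_f(y, t):
    """Jacobian of the logistic vector field (scalar case)."""
    return np.atleast_2d(r * (1.0 - 2.0 * y))

def y_exact(t):
    """Closed-form solution of Eq. (45), used as the reference trajectory."""
    return np.exp(r * t) / (1.0 / y0 - 1.0 + np.exp(r * t))
```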
The RMSE is plotted against step size in Fig. 4. It can be seen that EKF and UKF tend to produce errors more than an order of magnitude smaller than SCH and KER in general, with the notable exception of the UKF behaving badly for small step sizes and \(q = 4\). This is probably due to numerical issues in generating the integration nodes, which requires the computation of matrix square roots (Julier et al. 2000) that can become inaccurate for ill-conditioned matrices. Additionally, the average \(\chi ^2\)-statistic is plotted against step size in Fig. 5. Here it appears that all methods tend to be underconfident for \(q=1,2\), while SCH becomes overconfident for \(q=3,4\).
A demonstration of the error trajectory and the reported uncertainty of the solvers is shown in Fig. 6 for \(h = 10^{-1}\) and \(q = 2\). SCH and KER produce similar errors, and they are hard to discern in the figure. The same goes for EKF and UKF. Additionally, it can be seen that the solvers produce qualitatively different uncertainty estimates. While the uncertainty of EKF and UKF first grows and then shrinks as the solution approaches the fixed point at \(y(t) = 1\), the uncertainty of SCH grows over the entire interval, with the uncertainty of KER growing even faster (Fig. 6).
RMSE of SCH, EKF, KER, and UKF on the logistic equation using IWP(q) priors for \(q=1,\ldots ,4\) plotted against step size
Average \(\chi ^2\)-statistic of SCH, EKF, KER, and UKF on the logistic equation using IWP(q) priors for \(q=1,\ldots ,4\) plotted against step size. The expected \(\chi ^2\)-statistic is shown in black (E)
The errors (solid lines) and ± 2 standard deviation bands (dashed) for EKF, SCH, KER, and UKF on the logistic with \(q=2\) and \(h = 10^{-1}\). A line at 0 is plotted in solid black
The FitzHugh–Nagumo model
The FitzHugh–Nagumo model is given by:
$$\begin{aligned} \begin{bmatrix} {\dot{y}}_1(t) \\ {\dot{y}}_2(t) \end{bmatrix} = \begin{bmatrix} c\left( y_1(t) - \frac{y_1^3(t)}{3} + y_2(t) \right) \\ - \frac{1}{c}\left( y_1(t) - a + by_2(t) \right) \end{bmatrix}, \end{aligned}$$
where we set \((a,b,c) = (.2,.2,3)\) and \(y(0) = [-\,1\ 1]^{\mathsf {T}}\). As previous experiments showed that the behaviour of KER and UKF is similar to that of SCH and EKF, respectively, we opt to compare only the latter two to increase the readability of the presented results. As previously, the moments of \(X^{(1)}(0)\), \(X^{(2)}(0)\), and \(X^{(3)}(0)\) are initialised to their exact values and the remaining derivatives are initialised with zero mean and unit variance. The integration interval is set to [0, 20], all methods use an IWP(q) prior for \(q=1,\ldots ,4\), and the uncertainty is calibrated as explained in Sect. 4.3. A baseline solution is computed using MATLAB's ode45 function with an absolute tolerance of \(10^{-15}\) and a relative tolerance of \(10^{-12}\); all errors are computed under the assumption that ode45 provides the exact solution. The methods are examined for 10 step sizes uniformly placed on the interval \([10^{-3},10^{-1}]\).
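For reference, the sketch below (ours) writes out the FitzHugh–Nagumo vector field and its Jacobian with the parameters above, and computes a tightly toleranced baseline with SciPy's solve_ivp in place of the MATLAB ode45 baseline used in the experiment.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 0.2, 0.2, 3.0

def f(t, y):
    """FitzHugh-Nagumo vector field."""
    return np.array([c * (y[0] - y[0] ** 3 / 3 + y[1]),
                     -(y[0] - a + b * y[1]) / c])

def J_f(y):
    """Jacobian of the vector field, as used by an EKF linearisation."""
    return np.array([[c * (1.0 - y[0] ** 2), c],
                     [-1.0 / c, -b / c]])

# Baseline ("exact") solution on [0, 20] with y(0) = [-1, 1].
baseline = solve_ivp(f, (0.0, 20.0), np.array([-1.0, 1.0]),
                     rtol=1e-12, atol=1e-15, dense_output=True)
```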
The RMSE is shown in Fig. 7. For \(q=1\), EKF produces an error orders of magnitude larger than SCH, and for \(q = 2\) both methods produce similar errors until the step size grows too large, causing SCH to start producing errors orders of magnitude larger than EKF. For \(q=3,4\), EKF produces lower errors, and additionally SCH can be seen to become unstable for larger step sizes (at \(h \approx 5\cdot 10^{-2}\) for \(q=3\) and at \(h \approx 2\cdot 10^{-2}\) for \(q=4\)). Furthermore, the averaged \(\chi ^2\)-statistic is shown in Fig. 8. It can be seen that EKF is overconfident for \(q=1\) while SCH is underconfident. For \(q=2\), both methods are underconfident; EKF remains underconfident for \(q=3,4\), but SCH becomes overconfident for almost all step sizes.
The error trajectory for the first component of y and the reported uncertainty of the solvers are shown in Fig. 9 for \(h = 5\cdot 10^{-2}\) and \(q = 2\). It can be seen that both methods have periodically occurring spikes in their errors, with those of EKF being larger in magnitude but also briefer. However, the uncertainty estimate of the EKF also spikes at the same time, giving an adequate assessment of its error. On the other hand, the uncertainty estimate of SCH grows slowly and monotonically over the integration interval, with the error going outside the two-standard-deviation region at the first spike (slightly hard to see in the figure).
RMSE of SCH and EKF on the FitzHugh–Nagumo model using IWP(q) priors for \(q=1,\ldots ,4\) plotted against step size
Average \(\chi ^2\)-statistic of SCH and EKF on the FitzHugh–Nagumo model using IWP(q) priors for \(q=1,\ldots ,4\) plotted against step size
The errors (solid lines) and ± 2 standard deviation bands (dashed) for EKF and SCH on the FitzHugh–Nagumo model with \(q=2\) and \(h = 5\cdot 10^{-2}\). A line at 0 is plotted in solid black
A Bernoulli equation
In the following experiment, we consider a transformation of Eq. (45), \(\eta (t) = \sqrt{y(t)}\), for \(r = 2\). The resulting ODE for \(\eta (t)\) now has two stable equilibrium points, \(\eta (t) = \pm 1\), and an unstable equilibrium point at \(\eta (t) = 0\). This makes it a simple test domain for different sampling-based ODE solvers, because different types of posteriors ought to arise. We compare the proposed particle filter using both the proposal in Eq. (24) [PF(1)] and the EKF proposal in Eq. (25) [PF(2)] with the method by Chkrebtii et al. (2016) (CHK) and the one by Conrad et al. (2017) (CON) for estimating \(\eta (t)\) on the interval \(t\in [0,5]\) with the initial condition set to \(\eta _0 = 0\). Both PF and CHK use an IWP(q) prior and set \(R = \kappa h^{2q+1}\). CON uses a Runge–Kutta method of order q with perturbation variance \(h^{2q+1}/[2q(q!)^2]\), so as to roughly match the incremental variance of the noise entering PF(1), PF(2), and CHK, which is determined by Q(h) and not R.
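For completeness, the transformed equation follows from the chain rule applied to \(\eta (t) = \sqrt{y(t)}\) together with Eq. (45) (a short derivation; the final equality uses \(r = 2\)):

$$\begin{aligned} {\dot{\eta }}(t) = \frac{{\dot{y}}(t)}{2\sqrt{y(t)}} = \frac{r\,y(t)\big (1-y(t)\big )}{2\eta (t)} = \frac{r}{2}\,\eta (t)\big (1-\eta ^2(t)\big ) = \eta (t) - \eta ^3(t), \end{aligned}$$

which indeed has stable equilibria at \(\eta = \pm 1\) and an unstable equilibrium at \(\eta = 0\).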
First, we attempt to estimate \(y(5)=0\) for 10 step sizes uniformly placed on the interval \([10^{-3},10^{-1}]\), with \(\kappa = 1\) and \(\kappa = 10^{-10}\). All methods use 1000 samples/particles, and they estimate y(5) by taking the mean over samples/empirical measures. The estimate of y(5) is plotted against the step size in Fig. 10. In general, the error increases with the step size for all methods, though this is most easily discerned in Fig. 10b, d. All in all, it appears that CHK, PF(1), and PF(2) behave similarly with regard to the estimation, while CON appears to produce somewhat larger errors. Furthermore, the effect of \(\kappa \) appears to be greatest on PF(1) and PF(2), as best illustrated in Fig. 10c.
Additionally, kernel density estimates for the different methods are made for the time points \(t=1,3,5\) for \(\kappa =1\), \(q=1,2\) and \(h=10^{-1},5\cdot 10^{-2}\). In Fig. 11, kernel density estimates for \(h = 10^{-1}\) are shown. At \(t = 1\), all methods produce fairly concentrated unimodal densities that then disperse as time goes on, with CON being the least concentrated and dispersing the quickest, followed by PF(1)/PF(2), and lastly CHK. Furthermore, CON becomes bimodal as time goes on, which is best seen for \(q=1\) in Fig. 11e. On the other hand, the alternatives vary between unimodal (CHK in Fig. 11f, and to some degree PF(1) and PF(2)), bimodal (PF(1) and CHK in Fig. 11e), and even mildly trimodal (PF(2) in Fig. 11e).
Similar behaviour of the methods is observed for \(h = 5 \cdot 10^{-2}\) in Fig. 12, though here all methods are generally more concentrated.
Fig. 10 Sample mean estimate of the solution at \(T = 5\)
Kernel density estimates of the solution of the Bernoulli equation for \(h = 10^{-1}\) and \(\kappa = 1\). Mind the different scale of the axes
Kernel density estimates of the solution of the Bernoulli equation for \(h = 5\cdot 10^{-2}\) and \(\kappa = 1\). Mind the different scale of the axes
Conclusion and discussion
In this paper, we have presented a novel formulation of probabilistic numerical solution of ODEs as a standard problem in GP regression with a nonlinear measurement function, and with measurements that are identically zero. The new model formulation enables the use of standard methods in signal processing to derive new solvers, such as EKF, UKF, and PF. We can also recover many of the previously proposed sequential probabilistic ODE solvers as special cases.
Additionally, we have demonstrated excellent stability properties of the EKF and UKF on linear test equations; that is, A-stability has been established. The notion of A-stability is closely connected with the solution of stiff equations, which is typically achieved with implicit or semi-implicit methods (Hairer and Wanner 1996). In this respect, our methods (EKF and UKF) most closely fit into the class of semi-implicit methods, such as the methods of Rosenbrock type (Hairer and Wanner 1996, Chapter IV.7). It does seem feasible, though, to nudge the proposed methods towards the class of implicit methods by means of iterative Gaussian filtering (Bell and Cathey 1993; Garcia-Fernandez et al. 2015; Tronarp et al. 2018).
While the notion of A-stability has been fairly successful in discerning between methods with good and bad stability properties, it is not the whole story (Alexander 1977, Section 3). This has led to other notions of stability, such as L-stability and B-stability (Hairer and Wanner 1996, Chapters IV.3 and IV.12). It is certainly an interesting question whether the present framework allows for the development of methods satisfying these stricter notions of stability.
An advantage of our model formulation is the decoupling of the prior from the likelihood. Thus future work would involve investigating how well the exact posterior to our inference problem approximates the ODE and then analysing how well different approximate inference strategies behave. However, for \(h\rightarrow 0\), we expect that the novel Gaussian filters (EKF,UKF) will exhibit polynomial worst-case convergence rates of the mean and its credible intervals, that is, its Bayesian uncertainty estimates, as has already been proved in Kersting et al. (2018) for 0th order Taylor series filters with arbitrary constant measurement variance R (see Sect. 2.4).
Our Bayesian recast of ODE solvers might also pave the way towards an average-case analysis of these methods, which has already been executed in Ritter (2000) for the special case of Bayesian quadrature. For the PF, a thorough convergence analysis similar to Chkrebtii et al. (2016), Conrad et al. (2017), Abdulle and Garegnani (2018) and Del Moral (2004) appears feasible. However, the results on spline approximations for ODEs (see, e.g. Loscalzo and Talbot 1967) might also apply to the present methodology via the correspondence between GP regression and spline function approximations (Kimeldorf and Wahba 1970).
Footnote 1: Here 'equivalent' is used in the sense that the probability distribution of the continuous-time process evaluated on the grid coincides with the probability distribution of the discrete-time process (Särkkä 2006, p. 17).
Footnote 2: Note that Kersting et al. (2018) uses indexing \(i,j=0,\ldots ,q\) while we here use \(i,j=1,\ldots ,q+1\).
Footnote 3: Anderson and Moore (1979) denotes the measurement matrix by \(H^{\mathsf {T}}\) while we denote it by H. With this in mind, our notion of complete detectability does not differ from Anderson and Moore (1979).
Footnote 4: Some authors require stability on the line \(\lambda _1 = 0\) as well (Hairer and Wanner 1996). Due to the exclusion of the origin, EKF and UKF cannot be said to be A-stable in this sense.
Footnote 5: Again note that the EKF and appropriate numerical quadrature methods are equivalent to this estimator here (see Lemma 1).
Abdulle, A., Garegnani, G.: Random time step probabilistic methods for uncertainty quantification in chaotic and geometric numerical integration (2018). arXiv:1703.03680 [mathNA]
Alexander, R.: Diagonally implicit Runge–Kutta methods for stiff ODEs. SIAM J. Numer. Anal. 14(6), 1006–1021 (1977)
Anderson, B., Moore, J.: Optimal Filtering. Prentice-Hall, Englewood Cliffs (1979)
Andrieu, C., Doucet, A., Holenstein, R.: Particle Markov chain Monte Carlo methods. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 72(3), 269–342 (2010)
Bar-Shalom, Y., Li, X.R., Kirubarajan, T.: Estimation with Applications to Tracking and Navigation: Theory, Algorithms and Software. Wiley, New York (2001)
Bell, B.M., Cathey, F.W.: The iterated Kalman filter update as a Gauss–Newton method. IEEE Trans. Autom. Control 38(2), 294–297 (1993)
Briol, F.X., Oates, C.J., Girolami, M., Osborne, M.A., Sejdinovic, D.: Probabilistic integration: a role for statisticians in numerical analysis? (with discussion and rejoinder). Stat. Sci. 34(1), 1–22 (2019). (Rejoinder on pp 38–42)
Butcher, J.C.: Numerical Methods for Ordinary Differential Equations, 2nd edn. Wiley, New York (2008)
Byrne, G.D., Hindmarsh, A.C.: A polyalgorithm for the numerical solution of ordinary differential equations. ACM Trans. Math. Softw. 1(1), 71–96 (1975)
Calderhead, B., Girolami, M., Lawrence, N.D.: Accelerating Bayesian inference over nonlinear differential equations with Gaussian processes. In: Koller, D., Schuurmans, D., Bengio, Y., Bottou, L. (eds.) Advances in Neural Information Processing Systems 21 (NIPS), pp. 217–224. Curran Associates, Inc. (2009)
Cappé, O., Moulines, E., Rydén, T.: Inference in Hidden Markov Models. Springer, Berlin (2005)
Chen, T.Q., Rubanova, Y., Bettencourt, J., Duvenaud, D.K.: Neural ordinary differential equations. In: Bengio, S., Wallach, H., Larochelle, H.,Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31 (NIPS), pp. 6571–6583. Curran Associates, Inc. (2018)
Chkrebtii, O.A., Campbell, D.A., Calderhead, B., Girolami, M.A.: Bayesian solution uncertainty quantification for differential equations. Bayesian Anal. 11(4), 1239–1267 (2016)
Cockayne, J., Oates, C., Sullivan, T., Girolami, M.: Bayesian probabilistic numerical methods. Siam Rev. (2019). (to appear)
Conrad, P.R., Girolami, M., Särkkä, S., Stuart, A., Zygalakis, K.: Statistical analysis of differential equations: introducing probability measures on numerical solutions. Stat. Comput. 27(4), 1065–1082 (2017)
Crisan, D., Doucet, A.: A survey of convergence results on particle filtering methods for practitioners. IEEE Trans. Signal Process. 50(3), 736–746 (2002)
Dahlquist, G.G.: A special stability problem for linear multistep methods. BIT Numer. Math. 3(1), 27–43 (1963)
Del Moral, P.: Feynman–Kac Formulae: Genealogical and Interacting Particle Systems with Applications. Springer, Berlin (2004)
Deuflhard, P., Bornemann, F.: Scientific Computing with Ordinary Differential Equations. Springer, Berlin (2002)
Doucet, A., Tadić, V.B.: Parameter estimation in general state-space models using particle methods. Ann. Inst. Stat. Math. 55(2), 409–422 (2003)
Doucet, A., Godsill, S., Andrieu, C.: On sequential Monte Carlo sampling methods for Bayesian filtering. Stat. Comput. 10(3), 197–208 (2000)
Doucet, A., De Freitas, N., Gordon, N.: An introduction to sequential Monte Carlo methods. In: Sequential Monte Carlo methods in practice, pp 3–14. Springer, New York (2001)
Garcia-Fernandez, A.F., Svensson, L., Morelande, M.R., Särkkä, S.: Posterior linearization filter: principles and implementation using sigma points. IEEE Trans. Signal Process. 63(20), 5561–5573 (2015)
Golub, G.H., Welsch, J.H.: Calculation of Gauss quadrature rules. Math. Comput. 23(106), 221–230 (1969)
Grewal, M.S., Andrews, A.P.: Kalman Filtering: Theory and Practice Using MATLAB. Wiley, New York (2001)
Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Springer Series in Computational Mathematics, vol. 14. Springer, Berlin (1996)
Hairer, E., Nørsett, S., Wanner, G.: Solving Ordinary Differential Equations I—Nonstiff Problems. Springer, Berlin (1987)
Hartikainen, J., Särkkä, S.: Kalman filtering and smoothing solutions to temporal Gaussian process regression models. In: IEEE International Workshop on Machine Learning for Signal Processing (MLSP), pp 379–384 (2010)
Hennig, P., Hauberg, S.: Probabilistic solutions to differential equations and their application to Riemannian statistics. In: Proceedings of the 17th International Conference on Artificial Intelligence and Statistics (AISTATS), JMLR, W&CP, vol. 33 (2014)
Hennig, P., Osborne, M., Girolami, M.: Probabilistic numerics and uncertainty in computations. Proc. R. Soc. Lond. A Math. Phys. Eng. Sci. 471, 2179 (2015)
Hochbruck, M., Ostermann, A., Schweitzer, J.: Exponential Rosenbrock-type methods. SIAM J. Numer. Anal. 47(1), 786–803 (2009)
Ionides, E.L., Bhadra, A., Atchadé, Y., King, A., et al.: Iterated filtering. Ann. Stat. 39(3), 1776–1802 (2011)
Jazwinski, A.: Stochastic Processes and Filtering Theory. Academic Press, London (1970)
Julier, S., Uhlmann, J., Durrant-Whyte, H.F.: A new method for the nonlinear transformation of means and covariances in filters and estimators. IEEE Trans. Autom. Control 45(3), 477–482 (2000)
Kantas, N., Doucet, A., Singh, S.S., Maciejowski, J.M.: An overview of sequential Monte Carlo methods for parameter estimation in general state-space models. IFAC Proc. Vol. 42(10), 774–785 (2009)
Kennedy, M., O'Hagan, A.: Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B 63(3), 425–464 (2002)
Kersting, H., Hennig, P.: Active uncertainty calibration in Bayesian ODE solvers. In: 32nd Conference on Uncertainty in Artificial Intelligence (UAI 2016), pp. 309–318. Curran Associates, Inc. (2016)
Kersting, H., Sullivan, T., Hennig, P.: Convergence rates of Gaussian ODE filters (2018). arXiv:1807.09737 [mathNA]
Kimeldorf, G., Wahba, G.: A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Ann. Math. Stat. 41(2), 495–502 (1970)
Kokkala, J., Solin, A., Särkkä, S.: Expectation maximization based parameter estimation by sigma-point and particle smoothing. In: 2014 17th International Conference on Information Fusion (FUSION), pp 1–8. IEEE (2014)
Lie, H., Stuart, A., Sullivan, T.: Strong convergence rates of probabilistic integrators for ordinary differential equations. Stat. Comput. (2019). https://doi.org/10.1007/s11222-019-09898-6
Lindsten, F.: An efficient stochastic approximation EM algorithm using conditional particle filters. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2013, pp 6274–6278. IEEE (2013)
Lindström, E., Ionides, E., Frydendall, J., Madsen, H.: Efficient iterated filtering. IFAC Proc. Vol. 45(16), 1785–1790 (2012)
Lindström, E., Madsen, H., Nielsen, J.N.: Statistics for Finance. Chapman and Hall, London (2015)
Loscalzo, F., Talbot, T.: Spline function approximations for solutions of ordinary differential equations. SIAM J. Numer. Anal. 4, 3 (1967)
Magnani, E., Kersting, H., Schober, M., Hennig, P.: Bayesian filtering for ODEs with bounded derivatives (2017). arXiv:1709.08471 [csNA]
McNamee, J., Stenger, F.: Construction of fully symmetric numerical integration formulas. Numer. Math. 10(4), 327–344 (1967)
Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 5th edn. Springer, Berlin (2003)
Paul, S., Chatzilygeroudis, K., Ciosek, K., Mouret, J.B., Osborne, M.A., Whiteson, S.: Alternating optimisation and quadrature for robust control. In: AAAI Conference on Artificial Intelligence (2018)
Prüher, J., Šimandl, M.: Bayesian quadrature in nonlinear filtering. In: 12th International Conference on Informatics in Control, Automation and Robotics (ICINCO), vol. 01, pp. 380–387 (2015)
Rasmussen, C., Williams, C.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)
Ritter, K.: Average-Case Analysis of Numerical Problems. Springer, Berlin (2000)
Rosenbrock, H.H.: Some general implicit processes for the numerical solution of differential equations. Comput. J. 5(4), 329–330 (1963)
Särkkä, S.: Recursive Bayesian inference on stochastic differential equations. Ph.D. thesis, Helsinki University of Technology (2006)
Särkkä, S.: Bayesian Filtering and Smoothing. Institute of Mathematical Statistics Textbooks. Cambridge University Press, Cambridge (2013)
Schober, M., Duvenaud, D., David, K., Hennig, P.: Probabilistic ODE solvers with Runge–Kutta means. In: Ghahramani, Z., Welling, M., Cortes, C., Lawrence, N.D., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 27 (NIPS), pp. 739–747. Curran Associates, Inc. (2014)
Schober, M., Särkkä, S., Hennig, P.: A probabilistic model for the numerical solution of initial value problems. Stat. Comput. 29(1), 99–122 (2019)
Schön, T.B., Wills, A., Ninness, B.: System identification of nonlinear state-space models. Automatica 47(1), 39–49 (2011)
Schweppe, F.: Evaluation of likelihood functions for Gaussian signals. IEEE Trans. Inf. Theory 11(1), 61–70 (1965)
Skilling, J.: Bayesian solution of ordinary differential equations. In: Smith, C.R., Erickson, G.J., Neudorfer, P.O. (eds.) Maximum Entropy and Bayesian Methods, pp. 23–37. Springer, Dordrecht (1992)
Storvik, G.: Particle filters for state-space models with the presence of unknown static parameters. IEEE Trans. Signal Process. 50(2), 281–289 (2002)
Taniguchi, A., Fujimoto, K, Nishida, Y.: On variational Bayes for identification of nonlinear state-space models with linearly dependent unknown parameters. In: 56th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE), pp. 572–576. IEEE (2017)
Teymur, O., Zygalakis, K., Calderhead, B.: Probabilistic linear multistep methods. In: Lee, D.D., Sugiyama, M. Luxburg, U.V. Guyon, I., Garnett, R. (eds.) Advances in Neural Information Processing Systems 29 (NIPS), pp. 4321–4328. Curran Associates, Inc. (2016)
Teymur, O., Lie, HC., Sullivan, T., Calderhead, B.: Implicit probabilistic integrators for ODEs. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31 (NIPS), pp. 7244–7253. Curran Associates, Inc. (2018)
Tronarp, F., Garcia-Fernandez, A.F., Särkkä, S.: Iterative filtering and smoothing in non-linear and non-Gaussian systems using conditional moments. IEEE Signal Process. Lett. 25(3), 408–412 (2018). https://doi.org/10.1109/LSP.2018.2794767
Tronarp, F., Karvonen, T., Särkkä, S.: Student's \( t \)-filters for noise scale estimation. IEEE Signal Process. Lett. 26(2), 352–356 (2019)
Wang, J., Cockayne, J., Oates, C.: On the Bayesian solution of differential equations. In: Proceedings of the 38th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering (2018)
Zhang, J., Mokhtari, A., Sra, S., Jadbabaie, A.: Direct Runge–Kutta discretization achieves acceleration. In: Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31 (NIPS), pp. 3900–3909. Curran Associates, Inc. (2018)
Open access funding provided by Aalto University. This material was developed, in part, at the Prob Num 2018 workshop hosted by the Lloyd's Register Foundation programme on Data-Centric Engineering at the Alan Turing Institute, UK, and supported by the National Science Foundation, USA, under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the above-named funding bodies and research institutions. Filip Tronarp gratefully acknowledge financial support by Aalto ELEC Doctoral School. Additionally, Filip Tronarp and Simo Särkkä gratefully acknowledge financial support by Academy of Finland Grant #313708. Hans Kersting and Philipp Hennig gratefully acknowledge financial support by the German Federal Ministry of Education and Research through BMBF Grant 01IS18052B (ADIMEM). Philipp Hennig also gratefully acknowledges support through ERC StG Action 757275/PANAMA. Finally, the authors would like to thank the editor and the reviewers for their help in improving the quality of this manuscript.
Department of Electrical Engineering and Automation, Aalto University, 02150, Espoo, Finland
Filip Tronarp & Simo Särkkä
University of Tübingen and Max Planck Institute for Intelligent Systems, 72076, Tübingen, Germany
Hans Kersting & Philipp Hennig
Filip Tronarp
Hans Kersting
Simo Särkkä
Philipp Hennig
Correspondence to Filip Tronarp.
Proof of Proposition 1
In this section, we prove Proposition 1. First note that, by Eq. (4), we have
$$\begin{aligned} \frac{{\mathrm{d}}{\mathbb {C}}\big [X^{(1)}(t),X^{(2)}(s)\big ]}{{\mathrm{d}}t} = {\mathbb {C}}\big [X^{(2)}(t),X^{(2)}(s)\big ], \end{aligned}$$
where \({\mathbb {C}}\) is the cross-covariance operator. That is, the cross-covariance matrix between \(X^{(1)}(t)\) and \(X^{(2)}(s)\) is just the integral of the covariance matrix function of \(X^{(2)}\). Now define
$$\begin{aligned} ({\mathbf {X}}^{(i)})^{\mathsf {T}}&= \begin{bmatrix} \big (X^{(i)}_1\big )^{\mathsf {T}}&\dots&\big (X^{(i)}_N\big )^{\mathsf {T}}\end{bmatrix}, \ i=1,\dots ,q+1, \end{aligned}$$
$$\begin{aligned} {\mathbf {g}}^{\mathsf {T}}&= \begin{bmatrix} g^{\mathsf {T}}(h)&\dots&g^{\mathsf {T}}(Nh) \end{bmatrix},\end{aligned}$$
$$\begin{aligned} {\mathbf {z}}^{\mathsf {T}}&= \begin{bmatrix} z_1^{\mathsf {T}}&\dots&z_N^{\mathsf {T}}\end{bmatrix}. \end{aligned}$$
Since Equation (3) defines a Gaussian process, we have that \({\mathbf {X}}^{(1)}\) and \({\mathbf {X}}^{(2)}\) are jointly Gaussian distributed and from Eq. (48) the blocks of \({\mathbb {C}}[{\mathbf {X}}^{(1)},{\mathbf {X}}^{(2)}]\) are given by
$$\begin{aligned} {\mathbb {C}}\big [{\mathbf {X}}^{(1)},{\mathbf {X}}^{(2)}\big ]_{n,m} = \int _0^{nh} {\mathbb {C}}\big [X^{(2)}(t),X^{(2)}(mh)\big ] {\mathrm{d}}t \end{aligned}$$
which is precisely the kernel mean, with respect to the Lebesgue measure on [0, nh], evaluated at mh (see Briol et al. 2019, Section 2.2). Furthermore,
$$\begin{aligned} {\mathbb {V}}\big [{\mathbf {X}}^{(2)}\big ]_{n,m} = {\mathbb {C}}\big [X^{(2)}(nh),X^{(2)}(mh)\big ], \end{aligned}$$
that is, the covariance matrix function (referred to as kernel matrix in Bayesian quadrature literature (Briol et al. 2019)) evaluated at all pairs in \(\{h,\dots ,Nh\}\). From Gaussian conditioning rules, we have for the conditional means and covariance matrices given \({\mathbf {X}}^{(2)} - {\mathbf {g}} = 0\), denoted by \({\mathbb {E}}_{{\mathcal {D}}}[X^{(1)}(nh)]\) and \({\mathbb {V}}_{{\mathcal {D}}}[X^{(1)}(nh)]\), respectively, that
$$\begin{aligned} {\mathbb {E}}_{{\mathcal {D}}}\big [X^{(1)}(nh)\big ]&= {\mathbb {E}}\big [X^{(1)}(nh)\big ] + {\mathbf {w}}_n\big ({\mathbf {z}} + {\mathbf {g}} - {\mathbb {E}}\big [{\mathbf {X}}^{(2)}\big ]\big )\\&= {\mathbb {E}}\big [X^{(1)}(nh)\big ] + {\mathbf {w}}_n\big ({\mathbf {g}} - {\mathbb {E}}\big [{\mathbf {X}}^{(2)}\big ]\big ), \\ {\mathbb {V}}_{{\mathcal {D}}}\big [X^{(1)}(nh)\big ]&= {\mathbb {V}}\big [X^{(1)}(nh)\big ] - {\mathbf {w}}_n{\mathbb {V}}\big [{\mathbf {X}}^{(2)}\big ]{\mathbf {w}}_n^{\mathsf {T}}, \end{aligned}$$
where we used the fact that \({\mathbf {z}} = 0\) by definition and \({\mathbf {w}}_n\) are the Bayesian quadrature weights associated to the integral of g over the domain [0, nh], given by (see Briol et al. 2019, Proposition 1)
$$\begin{aligned} {\mathbf {w}}_n^{\mathsf {T}}= {\mathbb {V}}\big [{\mathbf {X}}^{(2)}\big ]^{-1} \begin{bmatrix} {\mathbb {C}}\big [X^{(1)}(nh),X^{(2)}(h)\big ]^{\mathsf {T}}\\ \vdots \\ {\mathbb {C}}\big [X^{(1)}(nh),X^{(2)}(Nh)\big ]^{\mathsf {T}}\end{bmatrix}. \end{aligned}$$
\(\square \)
Proof of Proposition 3

To prove Proposition 3, expand the expressions for \(S_n\) and \(K_n\) as given by Eq. (12):
$$\begin{aligned} S_n&= {\dot{C}} \varSigma _n^P {\dot{C}}^{\mathsf {T}}+ {\mathbb {V}}\big [f(CX_n,t_n)\mid z_{1:n-1}\big ] \\&\quad - {\dot{C}}{\mathbb {C}}\big [X_n,f(CX_n,t_n)\mid z_{1:n-1}\big ] \\&\quad - {\mathbb {C}}\big [X_n,f(CX_n,t_n)\mid z_{1:n-1}\big ]^{\mathsf {T}}{\dot{C}}^{\mathsf {T}}\\&\approx {\dot{C}} \varSigma _n^P {\dot{C}}^{\mathsf {T}}+ {\mathbb {V}}\big [f(CX_n,t_n)\mid z_{1:n-1}\big ] \\ K_n&= \big (\varSigma _n^P {\dot{C}}^{\mathsf {T}}- {\mathbb {C}}\big [X_n,f(CX_n,t_n)\mid z_{1:n-1}\big ]\big )S_n^{-1} \\&\approx \varSigma _n^P {\dot{C}}^{\mathsf {T}}S_n^{-1}, \end{aligned}$$
where in the second steps the approximation \({\mathbb {C}}[X_n,f(CX_n,t_n)\mid z_{1:n-1}] \approx 0\) was used. Lastly, recall that \(z_n \triangleq 0\); hence, the update equations become
$$\begin{aligned} S_n&\approx {\dot{C}} \varSigma _n^P {\dot{C}}^{\mathsf {T}}+ {\mathbb {V}}\big [f(CX_n,t_n)\mid z_{1:n-1}\big ], \end{aligned}$$
$$\begin{aligned} K_n&\approx \varSigma _n^P {\dot{C}}^{\mathsf {T}}S_n^{-1},\end{aligned}$$
$$\begin{aligned} \mu _n^F&\approx \mu _n^P + K_n\big ( {\mathbb {E}}\big [f(CX_n,t_n)\mid z_{1:n-1}\big ] - {\dot{C}}\mu _n^P\big ), \end{aligned}$$
When \({\mathbb {E}}[f(CX_n,t_n)\mid z_{1:n-1}]\) and \({\mathbb {V}}[f(CX_n,t_n)\mid z_{1:n-1}]\) are approximated by Bayesian quadrature using a squared exponential kernel and a uniform set of nodes translated and scaled by \(\mu _n^P\) and \(\varSigma _n^P\), respectively, the method of Kersting and Hennig (2016) is obtained. \(\square \)
Proof of Proposition 4

Note that \((\breve{\mu }_n^F,\breve{\varSigma }_n^F)\) is the output of a misspecified Kalman filter (Tronarp et al. 2019, Algorithm 1). We indicate that a quantity from Eqs. (11) and (12) is computed by the misspecified Kalman filter by a breve accent; for example, \(\breve{\mu }_n^P\) is the predictive mean of the misspecified Kalman filter. If \(\varSigma _n^F = \sigma ^2 \breve{\varSigma }_n^F\) and \(\breve{\mu }_n^F = \mu _n^F\) hold, then for the prediction step we have
$$\begin{aligned} \mu _{n+1}^P&= A(h)\mu _n^F + \xi (h) = A(h)\breve{\mu }_n^F + \xi (h) = \breve{\mu }_{n+1}^P, \\ \varSigma _{n+1}^P&= A(h)\varSigma _n^F A^{\mathsf {T}}(h) + Q(h), \\&= \sigma ^2 \Big ( A(h)\breve{\varSigma }_n^F A^{\mathsf {T}}(h) + \breve{Q}(h) \Big ), \\&= \sigma ^2 \breve{\varSigma }_{n+1}^P, \end{aligned}$$
where we used the fact that \(Q(h) = \sigma ^2 \breve{Q}(h)\), which follows from \(L = \sigma \breve{L}\) and Eq. (8). Furthermore, recall that \(H_{n+1} = {\dot{C}} - \varLambda (t_{n+1}) C\), which for the update gives
$$\begin{aligned} S_{n+1}&= H_{n+1} \varSigma _{n+1}^P H_{n+1}^{\mathsf {T}}\\&= \sigma ^2 H_{n+1} \breve{\varSigma }_{n+1}^P H_{n+1}^{\mathsf {T}}\\&= \sigma ^2 \breve{S}_{n+1}. \\ K_{n+1}&= \varSigma _{n+1}^P H_{n+1}^{\mathsf {T}}S_{n+1}^{-1} \\&= \sigma ^2 \breve{\varSigma }_{n+1}^PH_{n+1}^{\mathsf {T}}[\sigma ^2 \breve{S}_{n+1}]^{-1} \\&= \breve{\varSigma }_{n+1}^PH_{n+1}^{\mathsf {T}}\breve{S}_{n+1}^{-1} \\&= \breve{K}_{n+1}. \\ {\hat{z}}_{n+1}&= H_{n+1} \mu _{n+1}^P - \zeta (t_n) \\&= H_{n+1} \breve{\mu }_{n+1}^P - \zeta (t_n) \\&= \breve{z}_{n+1}, \\ \mu _{n+1}^F&= \mu _{n+1}^P + K_{n+1}(z_{n+1} - {\hat{z}}_{n+1}) \\&= \breve{\mu }_{n+1}^P + \breve{K}_{n+1}(z_{n+1} - \breve{z}_{n+1}) \\&= \breve{\mu }_{n+1}^F. \\ \varSigma _{n+1}^F&= \varSigma _{n+1}^P - K_{n+1} S_{n+1} K_{n+1}^{\mathsf {T}}\\&= \sigma ^2 \Big ( \breve{\varSigma }_{n+1}^P - \breve{K}_{n+1} \breve{S}_{n+1} \breve{K}_{n+1}^{\mathsf {T}}\Big ) \\&= \sigma ^2\breve{\varSigma }_{n+1}^F. \end{aligned}$$
It thus follows by induction that \(\mu _n^F = \breve{\mu }_n^F\), \(\varSigma _n^F = \sigma ^2 \breve{\varSigma }_n^F\), \({\hat{z}}_n = \breve{z}_n\), and \(S_n = \sigma ^2 \breve{S}_n\) for \(n \ge 0 \). From Eq. (40), we have that the log-likelihood is given by
$$\begin{aligned} \log p(z_{1:N})&= \log \prod _{n=1}^N {\mathcal {N}}(z_n; {\hat{z}}_n,S_n) \\&= \log \prod _{n=1}^N {\mathcal {N}}(z_n; \breve{z}_n,\sigma ^2 \breve{S}_n) \\&= -\frac{Nd}{2}\log \sigma ^2 \\&\quad -\sum _{n=1}^N \frac{(z_n-\breve{z}_n)^{\mathsf {T}}\breve{S}_n^{-1} (z_n-\breve{z}_n)}{2\sigma ^2}. \end{aligned}$$
Taking the derivative of the log-likelihood with respect to \(\sigma ^2\) and setting it to zero gives the following estimating equation
$$\begin{aligned} 0 = -\frac{Nd}{2\sigma ^2} + \frac{1}{2(\sigma ^2)^2}\sum _{n=1}^N(z_n-\breve{z}_n)^{\mathsf {T}}\breve{S}_n^{-1} (z_n-\breve{z}_n), \end{aligned}$$
which has the following solution
$$\begin{aligned} \sigma ^2 = \frac{1}{Nd} \sum _{n=1}^N (z_n - \breve{z}_n)^{\mathsf {T}}\breve{S}_n^{-1} (z_n - \breve{z}_n). \end{aligned}$$
Tronarp, F., Kersting, H., Särkkä, S. et al. Probabilistic solutions to ordinary differential equations as nonlinear Bayesian filtering: a new perspective. Stat Comput 29, 1297–1315 (2019). https://doi.org/10.1007/s11222-019-09900-1
Issue Date: November 2019
Probabilistic numerics
Initial value problems
Nonlinear Bayesian filtering
GridPP5 Brunel University London Staff Grant
Lead Research Organisation: Brunel University
Department Name: Electronic and Electrical Engineering
This proposal, submitted in response to the 2014 invitation from STFC, aims to provide and operate a computing Grid for the exploitation of LHC data in the UK. The success of the current GridPP Collaboration will be built upon, and the UK's response to production of LHC data in the period April 2016 to March 2020 will be to ensure that there is a sustainable infrastructure providing "Distributed Computing for Particle Physics".
We propose to operate a distributed high throughput computational service as the main mechanism for delivering very large-scale computational resources to the UK particle physics community. This foundation will underpin the success and increase the discovery potential of UK physicists. We will operate a production-quality service, delivering robustness, scale and functionality. The proposal is fully integrated with international projects and we must exploit the opportunity to capitalise on the UK leadership already established in several areas. The Particle Physics distributed computing service will increasingly be integrated with national and international initiatives.
The project will be managed across various domains and will deliver the UK's commitment to the Worldwide LHC Computing Grid (WLCG) and ensure that worldwide activities directly benefit the UK.
By 2015, the UK Grid infrastructure will have expanded in size to 50,000 cores, with more than 35 PetaBytes of storage. This will enable the UK to exploit, in an internationally competitive way, the unique physics potential of the LHC.
GridPP's knowledge exchange activities fall into two main areas: firstly, those aimed at other academic disciplines, and secondly, business and industry. GridPP has a strong outreach programme to a public and academic audience, and intends to continue this in GridPP5. The Dissemination Officer will organise GridPP's presence at conferences and events. This includes booking and manning booths, arranging backdrops, material, posters, screens, and rotas where appropriate. Examples of events that we have attended include The British Science Festival, The Royal Society Summer Exhibition, the British Science Association Science Communication Conference and Meet The Scientist at the Museum of Science and Industry in Manchester.
GridPP has developed an extensive website that is central to project communications. The Dissemination Officer will be responsible for producing news items for the website and drafting GridPP press releases. We have had broad coverage from these in the past, including many national newspapers and online publications.
Additional activities will include producing GridPP material, such as leaflets, posters, t-shirts, bags and magic cubes. We have found these very valuable in raising GridPP's and LHC's profile at minimal cost. The Dissemination Officer will also promote outreach training for members of the collaboration, will identify GridPP staff who have specific expertise in this area and will arrange occasional GridPP events, such as the Tier-1 open day.
On KE, our initial work has proved that GridPP's technology can be of use across a range of disciplines and sectors, and we plan to continue this work during GridPP5. The objectives of this program will be to improve awareness of the technologies developed by GridPP and its partners in academia and industry, and hence facilitate the increase in use of these technologies within new areas.
Apr 16 - Sep 20
ST/N001273/1
Peter Robert Hobson
Paul Kyberd
Brunel University, United Kingdom (Lead Research Organisation)
Rutherford Appleton Laboratory, Oxford (Collaboration)
University of Bristol, United Kingdom (Collaboration)
European Organization for Nuclear Research (CERN) (Collaboration)
Peter Robert Hobson (Principal Investigator) http://orcid.org/0000-0002-5645-5253
Paul Kyberd (Principal Investigator)
Sirunyan A (2018) Search for a heavy resonance decaying to a pair of vector bosons in the lepton plus merged jet final state at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2019) Search for Higgs and Z boson decays to J/ψ or Υ pairs in the four-muon final state in proton-proton collisions at √s = 13 TeV in Physics Letters B
Sirunyan A (2020) Dependence of inclusive jet production on the anti-kT distance parameter in pp collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2020) Search for resonant pair production of Higgs bosons in the bbZZ channel in proton-proton collisions at √s = 13 TeV in Physical Review D
Sirunyan A (2018) Nuclear modification factor of D0 mesons in PbPb collisions at √sNN = 5.02 TeV in Physics Letters B
Sirunyan A (2018) Search for R-parity violating supersymmetry in pp collisions at √s = 13 TeV using b jets in a final state with a single lepton, many jets, and high sum of large-radius jet masses in Physics Letters B
Sirunyan A (2018) Search for beyond the standard model Higgs bosons decaying into a bb̄ pair in pp collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2020) Search for supersymmetry in proton-proton collisions at √s = 13 TeV in events with high-momentum Z bosons and missing transverse momentum in Journal of High Energy Physics
Sirunyan A (2018) Search for resonances in the mass spectrum of muon pairs produced in association with b quark jets in proton-proton collisions at √s = 8 and 13 TeV in Journal of High Energy Physics
Sirunyan A (2019) Measurement of differential cross sections for Z boson pair production in association with jets at √s = 8 and 13 TeV in Physics Letters B
Sirunyan A (2018) Event shape variables measured using multijet final states in proton-proton collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2020) Measurements with silicon photomultipliers of dose-rate effects in the radiation damage of plastic scintillator tiles in the CMS hadron endcap calorimeter in Journal of Instrumentation
Sirunyan A (2019) Measurement of electroweak WZ boson production and search for new physics in WZ + two jets events in pp collisions at √s = 13 TeV in Physics Letters B
Sirunyan A (2020) Strange hadron production in pp and pPb collisions at √sNN = 5.02 TeV in Physical Review C
Sirunyan A (2019) Search for low-mass resonances decaying into bottom quark-antiquark pairs in proton-proton collisions at √s = 13 TeV in Physical Review D
Sirunyan A (2018) Constraints on models of scalar and vector leptoquarks decaying to a quark and a neutrino at √s = 13 TeV in Physical Review D
Sirunyan A (2020) A measurement of the Higgs boson mass in the diphoton decay channel in Physics Letters B
Sirunyan A (2018) Search for the decay of a Higgs boson in the ℓℓγ channel in proton-proton collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2019) Search for pair-produced three-jet resonances in proton-proton collisions at √s = 13 TeV in Physical Review D
Sirunyan A (2019) Search for long-lived particles decaying into displaced jets in proton-proton collisions at √s = 13 TeV in Physical Review D
Sirunyan A (2017) Search for massive resonances decaying into WW, WZ or ZZ bosons in proton-proton collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2020) Pileup mitigation at CMS in 13 TeV data in Journal of Instrumentation
Sirunyan A (2018) Search for an exotic decay of the Higgs boson to a pair of light pseudoscalars in the final state of two muons and two τ leptons in proton-proton collisions at √s = 13 TeV in Journal of High Energy Physics
Sirunyan A (2020) Search for direct top squark pair production in events with one lepton, jets, and missing transverse momentum at 13 TeV with the CMS experiment in Journal of High Energy Physics
Sirunyan A (2018) Search for supersymmetry in events with a τ lepton pair and missing transverse momentum in proton-proton collisions at √s = 13 TeV in Journal of High Energy Physics
Description GridPP6 Brunel Staff Grant
Funding ID ST/T001291/1
Organisation Science and Technologies Facilities Council (STFC)
Description CMS
Organisation European Organization for Nuclear Research (CERN)
Department Compact Muon Solenoid (CMS)
PI Contribution Construction, commissioning and operation of the CMS experiment. Data analysis in top-quark physics studies. Provision (via GridPP London Tier-2) of computing resources.
Collaborator Contribution Data acquisition, computing resources (Tier 0), co-authorship of publications, access to data, scientific leadership and support
Impact Over 200 refereed journal publications in experimental particle physics. Along with LHC data analysed by the ATLAS collaboration CMS determined the existence of the Higgs boson which was the subject of the 2013 Nobel Prize in Physics. Several STFC funded doctoral students have been trained in data analysis, computer programming and large-scale distributed Grid computing techniques.
Department Department of Physics
Organisation Rutherford Appleton Laboratory
Department Particle Physics Department
Organisation University of Bristol
Department School of Physics
\begin{definition}[Definition:Southern Hemisphere]
The '''southern hemisphere''' of Earth is the hemisphere between the equator and the South Pole.
Points in the southern hemisphere have latitude between $0 \degrees \, \mathrm S$ (the equator itself) and $90 \degrees \, \mathrm S$ (the South Pole).
\end{definition}
Journal of Data and Information Science, Volume 3 (2018), Issue 3 (August 2018)
Journal details: open access; published 4 times per year
Factors Influencing Cities' Publishing Efficiency
Csomós György
Published online: 22 Nov 2018
Pages: 43–80
Received: 05 Sep 2018; Accepted: 28 Oct 2018
DOI: https://doi.org/10.2478/jdis-2018-0014 © 2018 Csomós György, published by Sciendo. This work is licensed under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License.
Both the total publication output of China (Andersson et al. 2014; Grossetti et al. 2014; Morrison 2014; Zhou et al. 2009a) and its publication output in specific research areas (Kumar and Garg 2005; Lu and Wolfram 2010; Zou and Laubichler 2017; Zhou et al. 2009b) have significantly increased in the past decades. The growth rate of China's publication output is quite extreme; however, India (Gupta et al. 2011), Iran (Moin et al. 2005), Brazil (de Almeida and Guimarães 2013; Leta et al. 2006), South Korea (Kim et al. 2012), and Taiwan (Miyairi and Chang 2012) have also recently witnessed significant growth in their total publication output. At the same time, the global share of the publication output of the most developed countries (e.g., the United States, Canada, the Western European countries, Japan, and Australia) has been slowly decreasing. Naturally, the United States still has the highest publication output in the world (Leydesdorff and Wagner 2009; Nature Index 2016), but it can easily be predicted that, due to China's robust growth in the production of science, the global hegemony of the United States will soon cease.
Some cities in the world have long been considered as an outstanding locus of the production of science (Matthiessen and Schwarz 1999; Van Noorden 2010), and for some decades, an increasing number of cities have been involved in that process (Grossetti et al. 2014; Maisonobe et al. 2017). However, cities' contribution to the global publication output has been changing over time. Before the rise of Chinese cities, most global output was primarily produced by Northern American cities (e.g., New York, Boston, and Los Angeles), Western European cities (e.g., London, Paris, and Rome), and Japanese cities (e.g., Tokyo, Kyoto, and Osaka). Currently, Beijing is producing the highest publication output in the world (Csomós 2018; Van Noorden 2010). Furthermore, some cities in emerging countries have been positioning themselves as major actors in the production of science. For example, the publication output of Seoul (South Korea), Tehran (Iran), and São Paulo (Brazil) has also increased significantly.
The question is whether the total publication output clearly represents the scientific performance of a city. Can we find another method to measure the scientific performance of a city, a method that is not based on total (or any kind of) output? Does the geographical pattern of the global production of science change if we focus on quality rather than quantity regarding cities' publication output?
According to Van Noorden (2010), there are some alternatives to express the quality of a city's scientific performance, for example, measuring the 'average number of citations that a research paper from a city attracts' or measuring the total number of Nature and Science articles published by researchers affiliated with that city. Recent studies recommend that, to measure the quality of cities' publication output, the focus should be on the citation impact of the articles published in those cities. According to Bornmann and Leydesdorff (2011), Bornmann and Waltman (2011), Bornmann et al. (2011), Bornmann and Leydesdorff (2012), and Leydesdorff et al. (2014) as centres of excellence, cities can be assessed by counting the number of excellent papers (i.e., the top 1% most highly cited papers) produced in a city. These studies suggest that, based on the quality of the publication output, cities located in the most developed countries (i.e., the United States, Canada, the Western European countries, Japan, and Australia) are still in top positions.
It is, however, assumed that the higher a city's total publication output is, the more likely it is that the output of highly cited papers will also be high (e.g., currently Beijing produces the greatest number of highly cited papers in the world). This context suggests that, instead of focusing on cities' total publication output or the output of highly cited papers, we should focus on cities' publishing efficiency (i.e., the ratio of highly cited papers to all papers).
Why is it important to measure a city's publishing efficiency? It can be assumed that the higher the ratio of the number of highly cited papers to all articles produced in a city is, the more likely it is that researchers affiliated with that city conduct research resulting in new scientific breakthroughs (Van Noorden's study also suggests this nexus). Thus, publishing efficiency shows how successful a city is at the production of science. In 2015, 2.28 percent of the world's GDP was spent on research and development (R&D), but of course this value varied from country to country. In some countries, a higher proportion of GDP was spent on R&D (e.g., Israel, Japan, and Sweden spent more than three percent of their GDP on R&D), while most countries' R&D expenditures remain under the world average (e.g., the United Kingdom spent less than two percent of its GDP on R&D). Publishing efficiency is a measure that informs governments on how effectively R&D expenditures have been used (for example, the mean publishing efficiency of UK cities is almost twice that of Japanese cities, while Japan has a much higher R&D expenditure). Furthermore, because publishing efficiency is measured at the city level, it allows governments to introduce more effective regional development policies.
There are many factors influencing cities' publishing efficiency, some of which are city specific and some of which are general. Most of the city-specific factors are related to human factors (for example, how prolific a researcher or a team of researchers is), which, due to their nature, vary from city to city. However, based on the general factors, typical geographical patterns can be revealed. In this paper, I aim to measure cities' publishing efficiency worldwide and present the most significant general factors that might influence their publishing efficiency.
The structure of the paper is as follows. In Section 2, I present the data collection process and the methodology. Section 3 is divided into two subsections. In the first subsection, I rank cities based on their publishing efficiency, and in the second subsection, the most significant general factors are presented. Finally, in Section 4, I discuss the results and draw the conclusions.
Data and methodology
In the analysis, only cities that had at least 3,000 journal articles published in the period from 2014 to 2016 (i.e., at least 1,000 articles per year) are included. This criterion was met by 554 cities. Data of scientific publications were provided by the Clarivate Analytics' Web of Science database. Two constraints were implemented to improve the objectivity of the study: 1) Only journal articles were selected for the analysis, and 2) journals should be included in the Science Citation Index Expanded (SCI-EXPANDED), the Social Sciences Citation Index (SSCI), and the Arts & Humanities Citation Index (A&HCI) databases.
The reason for the first constraint is that journal articles are generally considered the most prestigious of scientific publications since they are 'the basic means of communicating new scientific knowledge' (Braun et al. 1989: 325). Therefore, I excluded all other types of publications indicated by the Web of Science (e.g., meeting abstracts, book reviews, editorial materials, reviews, proceedings papers, etc.). It should be noted, however, that in certain scientific fields conference proceedings (e.g., in computer sciences) and books (e.g., in social sciences and humanities) are also important publishing channels for researchers. However, two-thirds of the documents indexed in the Web of Science are journal articles; therefore, the results are only slightly biased towards health sciences and natural sciences.
The reason for the second constraint is that, in 2015, Clarivate Analytics launched a new database in the Web of Science, the Emerging Sources Citation Index (ESCI), which includes journals of regional importance from emerging scientific fields but that are not yet listed in the Journal Citation Report (i.e., they do not have an impact factor).
The publishing efficiency of a given city (x) in the period from 2014 to 2016 (y) is obtained by dividing the number of highly cited articles by the number of all articles produced by authors affiliated with that city (the value is multiplied by 100 to show a percentage). The Web of Science defines highly cited papers as papers that received enough citations as of July/August 2017 to place them in the top 1% of their academic fields, based on a highly cited threshold for the field and publication year. The formula is as follows:

$$Publishing\; Efficiency_{x,y}=\frac{\sum HCA_{x,y}}{\sum A_{x,y}}\times 100,$$

where HCA is the number of highly cited articles indicated by the Web of Science and A denotes all articles indexed by the Web of Science. (In the analysis, I focused on the document type 'articles' only; therefore, I narrowed the category of highly cited 'papers' to the category of highly cited 'articles'.)
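Where counts of highly cited articles and all articles are available for a city, the computation above is a one-line ratio. The following minimal sketch (Python) applies it to the Beijing figures quoted in Tables 1 and 2; the function name is chosen here for illustration only.

```python
def publishing_efficiency(highly_cited_articles: int, all_articles: int) -> float:
    """Share of a city's articles that are highly cited, expressed as a percentage."""
    return highly_cited_articles / all_articles * 100

# Beijing, 2014-2016: 2,650 highly cited articles out of 201,260 articles (Tables 1 and 2).
print(round(publishing_efficiency(2650, 201260), 3))  # -> 1.317
```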
The Web of Science presents the name of cities in the addresses reported by the authors of publications. Naturally, one can suggest that it is difficult, if not impossible, to compare scientometric data of cities with very different sizes and populations. For example, Beijing, the Chinese capital, with almost 22 million inhabitants and an area of 16,000 km², is obviously not on the same tier as Guildford, Surrey, a midsized English town with nearly 137,000 inhabitants. The total publication output of Beijing, produced in the period from 2014 to 2016, exceeded that of Guildford by 56 times, and the difference was the same in terms of the number of highly cited articles. However, the publishing efficiency of Beijing and that of Guildford is equal (1.317) since publishing efficiency is calculated as the quotient of the number of highly cited articles and the number of all articles. That is, publishing efficiency is a relative value, and the way it is calculated makes the size of the city in terms of area or population irrelevant. It should be added that the size of a city might influence its publishing efficiency: a small city or town that is home to a top-ranked university can produce higher efficiency than a metropolis that is the location of many universities and research institutions. Acknowledging this limitation, however, the general factors influencing cities' publishing efficiency can be well defined.
It should be noted that the period of data collection spanned from 26/09/2017 to 26/10/2017. The Web of Science has been continuously indexing articles in its database from previous years, especially from 2016. For this reason, in 2017, the number of articles published in 2016 (and before) has been increasing, just like the number of the highly cited articles. A repeated data collection would show minor differences in the obtained data; however, neither the publishing efficiency nor the rank of cities would considerably change.
Relationship between cities' publication output and publishing efficiency
In the past decades, the publication output of China has radically increased, and the growth rate has exceeded that of any other countries in the world. Furthermore, the publication outputs of some emerging countries, such as South Korea, Taiwan, India, and Iran, have also significantly increased (Csomós 2018; Grossetti et al. 2014; Maisonobe et al. 2017). Naturally, the annual outputs of the United States, Canada, the Western European countries, and Japan have also increased, but they have witnessed a much smaller growth rate than the emerging countries. Therefore, the share of the most developed countries in the production of science has been decreasing for decades (Leydesdorff and Wagner 2009).
Some cities in the world have long been considered as an outstanding locus of the production of science (Matthiessen and Schwarz 1999; Van Noorden 2010), and the growth rate of these cities' publication output is much higher than that of the countries in which they are located. By the beginning of the 2010s, the annual publication output of Beijing surpassed that of any other city in the world (Table 1). In the period from 2014 to 2016, it produced a greater number of scientific publications than Japan. Furthermore, Seoul, Tehran, and São Paulo have also experienced a significant increase in their publication output, and due to this development, their position in the ranking has approached that of Tokyo, Paris, New York, and Boston.
Table 1. Top 50 cities producing the greatest number of articles between 2014 and 2016 (columns: rank, country, city, total number of articles 2014–2016).
1 China Beijing 201260
2 China Shanghai 98227
3 England London 92453
4 South Korea Seoul 86447
5 Japan Tokyo 77440
6 France Paris 75033
7 China Nanjing 70320
8 USA New York, NY 68577
9 USA Boston, MA 63789
10 China Guangzhou 51922
11 China Wuhan 50343
12 Russia Moscow 47871
13 Spain Madrid 47061
14 Iran Tehran 46173
15 China Xi'an 44052
16 Spain Barcelona 40393
17 Brazil São Paulo 39916
18 USA Cambridge, MA 39121
19 China Hong Kong 39032
20 China Hangzhou 39029
21 USA Los Angeles, CA 38740
22 Canada Toronto, ON 38497
23 Australia Sydney, NSW 37676
24 USA Chicago, IL 37560
25 Singapore Singapore 37523
26 USA Baltimore, MD 36528
27 Germany Berlin 36509
28 USA Philadelphia, PA 36117
29 China Chengdu 36032
30 USA Houston, TX 33869
31 USA Atlanta, GA 32564
32 Canada Montreal, PQ 31820
33 China Tianjin 31764
34 England Oxford 31605
35 Germany Munich 30886
36 USA Seattle, WA 30779
37 Netherlands Amsterdam 30498
38 USA Washington, DC 29986
39 Switzerland Zürich 29242
40 Australia Melbourne, VIC 29198
41 Sweden Stockholm 28599
42 England Cambridge 27907
43 China Changsha 27442
44 USA Ann Arbor, MI 27322
45 Japan Osaka 26594
46 China Jinan 26557
47 China Harbin 26386
48 Denmark Copenhagen 25538
49 Italy Rome 25378
50 China Hefei 24911
However, many researchers wonder about the quality of publications produced in Brazilian, Chinese, Indian, Iranian, and even South Korean cities, which is also reflected in their low citation impact (Andersson et al. 2014; Maisonobe et al. 2017; Xie et al. 2014; Van Noorden 2010; Zhou et al. 2009a). In the period from 2014 to 2016, the greatest number of highly cited articles were produced in Beijing (see Table 2), which is not surprising if we consider the extremely high total publication output of Beijing in terms of the number of articles.
Table 2. Top 50 cities producing the greatest number of highly cited articles between 2014 and 2016 (columns: rank, country, city, number of highly cited articles 2014–2016).
1 China Beijing 2650
2 USA Boston, MA 2387
3 England London 2337
4 USA New York, NY 2237
5 USA Cambridge, MA 1827
6 France Paris 1601
7 China Shanghai 1208
8 USA Seattle, WA 1191
9 USA Los Angeles, CA 1142
10 England Oxford 1083
11 USA Stanford, CA 1058
12 USA Philadelphia, PA 1050
13 USA Baltimore, MD 1047
14 Canada Toronto, ON 1024
15 USA Chicago, IL 991
16 USA Atlanta, GA 971
17 USA Houston, TX 964
18 Spain Barcelona 920
19 USA Berkeley, CA 911
20 USA San Francisco, CA 910
21 England Cambridge 885
22 Singapore Singapore 871
23 China Nanjing 866
24 USA Bethesda, MD 825
25 Netherlands Amsterdam 806
26 Spain Madrid 772
27 USA Ann Arbor, MI 765
28 Germany Munich 764
29 Japan Tokyo 734
30 Switzerland Zürich 730
31 Denmark Copenhagen 729
32 Australia Sydney, NSW 723
33 China Hong Kong 720
34 Sweden Stockholm 708
35 South Korea Seoul 698
36 Germany Berlin 678
37 USA Washington, DC 674
38 USA Durham, NC 660
39 USA New Haven, CT 659
40 Canada Montreal, PQ 656
41 Australia Melbourne, VIC 643
42 China Wuhan 637
43 Germany Heidelberg 627
44 Canada Vancouver, BC 615
45 USA Pittsburgh, PA 588
46 USA Princeton, NJ 587
47 China Guangzhou 552
48 Switzerland Geneva 534
49 Saudi Arabia Jeddah 532
50 USA St. Louis, MO 511
Table 2 shows that the difference between the output of Beijing and Boston in terms of the number of highly cited articles is very small, while the total output of Beijing is three times greater than that of Boston (see Table 1). Comparing the rankings in Tables 1 and 2, the positions of some top-ranked cities in terms of total output (e.g., Tokyo and Seoul) have dropped in the ranking of cities producing the greatest number of highly cited articles. In addition, emerging cities such as Tehran and São Paulo, which both produced a substantial number of articles between 2014 and 2016, have disappeared from the ranking of the top 50 cities with the greatest number of highly cited articles.
A different geographical pattern will emerge if we focus on measuring cities' publishing efficiency (Figure 1). The mean publishing efficiency of the 554 cities included in the analysis is 1.818, which means that an average of 1.818% of all articles published in these cities in the period from 2014 to 2016 received enough citations to belong to the top 1% of highly cited articles. However, there are significant geographic differences behind the mean value. Figure 1 shows that the publishing efficiency of most Chinese, Japanese, and South Korean cities (many of which have high publication output in terms of the number of articles) is quite low, while the publishing efficiency of most Northern American and Western European cities is considerably higher. This information is not novel since, directly or indirectly, it has also been described by Van Noorden (2010), Bornmann and Waltman (2011), and Leydesdorff et al. (2014).
Figure 1. Geographic visualisation of cities' publishing efficiency.
However, a more fundamental question is whether there are factors influencing cities' publishing efficiency. Are there any general factors producing high publishing efficiency? Can we find general factors characterising cities having low publishing efficiency? Why is the publishing efficiency of Villejuif (France), Menlo Park, California (United States), or Jeddah (Saudi Arabia) high, and what are the reasons behind the low publishing efficiency of Tehran (Iran), Shenyang (China), and Niigata (Japan)? To answer these questions, we should explore and compare the general factors characterising the most efficient and least efficient cities.
The mean publishing efficiency of the top 100 most efficient cities is 3.179, while that of the bottom 100 least efficient cities is 0.621. In the top 100 cities, in the period from 2014 to 2016, a mean of 13,830 articles per city was produced, of which a mean of 444.71 articles per city received enough citations to belong to the top 1% of highly cited articles. In the same period, in the bottom 100 cities, a mean of 8,885 articles per city was produced, of which only a mean of 60.44 articles per city received enough citations to belong to the top 1% of highly cited articles. That is, the total output in terms of the number of articles of the top 100 most efficient cities is only 1.5 times greater than that of the bottom 100 least efficient cities. In contrast to the results above, there is a difference of more than 7.4 times between the number of highly cited articles produced in the top 100 cities and those produced in the bottom 100 cities. When exploring the general factors influencing cities' publishing efficiency, I will focus on presenting the differences between the top 100 most efficient cities and the bottom 100 least efficient cities.
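As a quick check of the comparison above, the two ratios quoted in this paragraph can be reproduced from the stated per-city means; the snippet below (Python) is only illustrative arithmetic and introduces no new data.

```python
# Mean per-city figures for 2014-2016 as stated above.
top100_articles, top100_highly_cited = 13830, 444.71
bottom100_articles, bottom100_highly_cited = 8885, 60.44

print(round(top100_articles / bottom100_articles, 2))          # ~1.56, i.e. roughly 1.5 times as many articles
print(round(top100_highly_cited / bottom100_highly_cited, 2))  # ~7.36, i.e. roughly 7.4 times as many highly cited articles
```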
The general factors examined in this paper are as follows: the linguistic environment of cities, cities' economic development level (both derived from country-level data), the location of excellent organisations, cities' international collaboration patterns, and their scientific field profile. The full list of the top 100 most efficient cities is available in Appendix 1, and the list of the bottom 100 least efficient cities is in Appendix 2.
Exploring factors influencing cities' publishing efficiency
Before exploring and evaluating the general factors influencing cities' publishing efficiency, it is necessary to present the geographical location of cities included in the analysis. The geographical location of a given city does not directly influence its publishing efficiency but allows us to draw indirect conclusions.
Most cities producing high publication output in terms of the number of articles (i.e., at least 3,000 articles in the period from 2014 to 2016) are in three geographical regions in the world: Europe, Asia, and Northern America (Table 3). The aggregate proportion of cities from other regions (i.e., Africa, Latin America, and Australia/New Zealand) does not reach 9%. Not just the output but also the mean publishing efficiency of cities differs depending on where they are located. Northern American cities produce the highest publishing efficiency, which is almost one-third greater than that of the European cities ranked second. However, if we divide Europe, the most complex region (there are 29 European countries in the analysis), into sub-regions, we obtain a more realistic picture. The mean publishing efficiencies of the Northern European (in the United Nations Statistics Division geoscheme, Northern Europe includes the countries of the United Kingdom) and the Western European cities are much higher than those of the Southern European and Eastern European cities, and while the publishing efficiencies of the former groups approach the efficiencies of the Northern American cities, those of the Eastern European cities are rather close to the efficiencies of the Latin American cities. In Asia, significant differences emerge as well. The mean publishing efficiencies of cities in Southern Asia and Eastern Asia are under the mean efficiencies of Western Asian cities. Furthermore, cities located in the former two Asian sub-regions produce the lowest mean publishing efficiencies in the world.
Table 3. Number of cities and their mean publishing efficiencies by region and sub-region* (columns: region/sub-region, number of cities, percentage in the dataset, cities' mean publishing efficiency).
Africa 11 1.99 1.306
Asia 131 23.65 1.009
Eastern Asia 88 15.88 0.950
Southern Asia 24 4.33 0.876
Western Asia 14 2.53 1.521
Europe 230 41.52 1.948
Eastern Europe 25 4.51 0.989
Northern Europe 60 10.83 2.260
Southern Europe 54 9.75 1.673
Western Europe 91 16.43 2.168
Latin America 21 3.79 0.952
Northern America 145 26.17 2.497
Canada 18 3.25 1.970
USA 127 22.92 2.572
Australia/New Zealand 16 2.89 1.918
World 554 1.818
*Regions and sub-regions are defined by the United Nations Statistics Division in its geoscheme.
Figure 2 shows the geographical location of the top 100 most efficient cities. Most of the top 100 cities are in two major regions: Northern America (primarily in the United States) and Europe (primarily in Northern Europe and Western Europe). In this group, only a few cities are outside the above regions: two of them are in Southern Africa (more precisely in South Africa), and two of them can be found in Western Asia (Saudi Arabia and Israel).
Figure 2. Geographical location of the top 100 most efficient cities.
The bottom 100 least efficient cities are primarily in three major regions in the world: Asia (primarily in Eastern Asia and Southern Asia), Europe (primarily in Eastern Europe), and Latin America. There is no Northern American city among the least efficient cities, and only two cities from Northern Europe and Western Europe belong to this group. Compared to the number of cities from other regions in the world, the number of African cities (6 out of 100 cities) is insignificant in this group; however, 55% of all African cities in the dataset of 554 cities belong to the bottom 100 least efficient cities.
The geographical location of the top 100 most efficient cities and the bottom 100 least efficient cities is indicative information but allows us to deduce some of the general factors influencing cities' publishing efficiency. One of the most crucial factors is related to linguistic features, more precisely to the dominance of the English language.
The linguistic environment of cities as a factor influencing cities' publishing efficiency
It is a generally accepted fact that English has acquired an almost exclusive status as the international language of scientific communication (i.e., the neutral "lingua franca"), leaving little space for other languages in science (Björkman 2011; López-Navarro et al. 2015; Tardy 2004; van Weijen 2012). Although the most important indexing and abstracting databases (i.e., the Web of Science and Scopus) have been including an increasing number of non-English language journals, English language journals are still significantly overrepresented (Li et al. 2014; Mongeon and Paul-Hus 2016). According to Paasi (2005), 'Anglo-American journals dominate the publishing space in science', and the international journal publication space is 'particularly limited to the English-speaking countries'. Furthermore, as Braun and Dióspatonyi (2005), Braun et al. (2007), and Leydesdorff and Wagner (2009) asserted with regard to gatekeeper positions such as editors-in-chief and editorial board members, the dominance of the United States is still unchallenged. Considering the above facts, the English language is assumed to be one of the main factors that influence cities' publishing efficiency.
Figure 3. Geographical location of the bottom 100 least efficient cities.
In this paper, I classified cities according to the Anglosphere system introduced by Bennet (2007). In this system, the United States, the United Kingdom, Canada, Australia, New Zealand, and Ireland belong to the Anglosphere−Core. Countries in the Anglosphere−Middle sphere (e.g., Nigeria and South Africa) have several official languages, including English (which is the principal language of administration and commerce), but 'where the primary connections to the outside world are in English'. The Anglosphere−Outer sphere consists of English-using states of other civilisations, including India, Pakistan, Bangladesh, the Arab states formerly under British control (primarily in the Middle East), and the Islamic former colonies of Britain (e.g., Malaysia and African states).
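Read as a classification rule, the three spheres above can be encoded as a simple lookup. The sketch below (Python) uses only countries named in this section as examples; the dictionary is illustrative, deliberately incomplete, and not the full classification used in the paper.

```python
# Illustrative, incomplete mapping based on the examples given above (Bennet 2007).
ANGLOSPHERE = {
    "USA": "Core", "United Kingdom": "Core", "Canada": "Core",
    "Australia": "Core", "New Zealand": "Core", "Ireland": "Core",
    "Nigeria": "Middle", "South Africa": "Middle",
    "India": "Outer", "Pakistan": "Outer", "Bangladesh": "Outer", "Malaysia": "Outer",
}

def anglosphere_sphere(country: str) -> str:
    """Return 'Core', 'Middle', 'Outer', or 'Outside' for a given country."""
    return ANGLOSPHERE.get(country, "Outside")

print(anglosphere_sphere("South Africa"))  # -> "Middle"
print(anglosphere_sphere("Japan"))         # -> "Outside"
```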
A total of 230 cities (out of 554 cities included in the analysis) are in countries in the Anglosphere, of which 195 cities are in countries in the Anglosphere−Core. Countries outside the Anglosphere are home to 324 cities. The mean publishing efficiency of cities in countries in the Anglosphere is 2.271, while that of the rest of the cities is 1.497. That is, the mean publishing efficiency of cities in countries in the Anglosphere is greater than that of the rest of the cities by 50%. If we focus on the mean publishing efficiency of cities located in countries in the Anglosphere−Core, it increases to 2.439.
As for the top 100 most efficient cities, 73% of them are in countries in the Anglosphere, and 70% of them can be found in countries in the Anglosphere–Core. The mean publishing efficiency of cities belonging to the latter group is 3.235. In contrast, 85% of the bottom 100 least efficient cities are in countries outside the Anglosphere, and 99% of them are in countries outside the Anglosphere−Core. Loughborough (England), having a publishing efficiency of 0.868, is the only city in the group of the bottom 100 cities that can be found in the Anglosphere–Core.
In conclusion, the publishing efficiency of cities located in countries in the Anglosphere (especially in the Anglosphere−Core) is much higher than that of any other cities located in countries outside the Anglosphere. That is, English is not only the international language of scientific communication but also the most fundamental factor influencing cities' publishing efficiency.
Economic development level of cities as a factor influencing publishing efficiency
Some researchers have observed linear or exponential correlation between scientometric indicators (e.g., the number of publications) and economic development indicators (e.g., GDP per capita or income per capita) (de Solla Price 1978; Kealey 1996; King 2004), while others assert that the correlation between these different sets of indicators is far from clear (Lee et al. 2011; Meo et al. 2013; Vinkler 2008; Vinkler 2010). It is, however, more commonly accepted that the higher the GDP per capita or the income level of a country is, the more likely it is that a greater number of publications will be produced in that country. The question is whether there is a relationship between cities' publishing efficiency (as a scientometric indicator) and cities' per capita income level (derived from country-level data).
The classification of countries (and cities) by income level is based upon data obtained from the World Bank Country and Lending Groups database (https://datahelpdesk.worldbank.org/knowledgebase/articles/906519-world-bank-country-and-lending-groups). In this database, countries are classified into four income-level groups: low-income countries (GNI per capita of $1,005 or less in 2016), lower middle-income countries (GNI per capita between $1,006 and $3,955), upper middle-income countries (GNI per capita between $3,956 and $12,235), and high-income countries (GNI per capita of $12,236 or more).
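A small helper encoding these 2016 thresholds might look as follows (Python); the function name and the sample GNI value are illustrative only.

```python
def income_group(gni_per_capita_usd: float) -> str:
    """World Bank income group from 2016 GNI per capita, using the thresholds listed above."""
    if gni_per_capita_usd <= 1005:
        return "low-income"
    if gni_per_capita_usd <= 3955:
        return "lower middle-income"
    if gni_per_capita_usd <= 12235:
        return "upper middle-income"
    return "high-income"

print(income_group(8250))  # hypothetical value -> "upper middle-income"
```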
Results show that 434 out of 554 cities included in the analysis are in high-income countries, 93 of them are in upper middle-income countries, and only 27 cities can be found in lower middle-income countries. None of the cities are in low-income countries. That is, most cities producing high publication output in terms of the number of articles (i.e., at least 3,000 articles in the period from 2014 to 2016) are in high-income countries. The mean publishing efficiency of cities from high-income countries is 2.057, that of cities located in upper middle-income countries is 0.997, and the mean publishing efficiency of cities from lower middle-income countries is only 0.881. There is a difference of more than double between the mean publishing efficiency of cities located in high-income countries and that of cities located in upper middle-income countries. The difference between the mean publishing efficiency of cities in upper middle-income countries and that of cities in lower middle-income countries seems to be insignificant.
As for the top 100 most efficient cities, 98% of them are in high-income countries, and only 2% of them can be found in upper middle-income countries. As compared to the quasi-homogeneous group of the top 100 cities, the bottom 100 least efficient cities show a more complex picture; 18% of them are in lower middle-income countries, 46% of them are in upper middle-income countries, but 36% of the bottom 100 least efficient cities are in high-income countries. Based on former studies available in the literature, this latter result might not have been expected; therefore, it requires more explanation.
As was mentioned above, most of the top 100 cities were in Northern America (primarily in the United States) and Europe (primarily in Northern European and Western European countries). Almost all countries in these regions are high-income countries. Contrary to the most efficient cities, none of the least efficient cities are in Northern America. Furthermore, only 17% of the bottom 100 cities are in European countries; except for five cities, all of them are in Eastern European countries (including Russia). Results show that 11 out of the 17 least efficient European cities are in high-income countries, and six of them are in Poland. Figure 4 illustrates that many cities producing low publishing efficiency are in Eastern Asian high-income countries. About half of these cities are in South Korea (11 cities), and the other half are in Japan (12 cities); i.e., in countries that belong to the most developed countries in the world in terms of income level. One might suggest that if South Korea and Japan are high-income countries, cities located in South Korea and Japan should produce high publishing efficiency. One reason for this discrepancy may be that both Korean and Japanese languages are considered language isolates (Campbell, 2010), and it is well studied how problematic it is for Japanese people to acquire sufficient communicative skills in English (even if they are researchers) (see, for example, Butler and Iino, 2005; Iwai, 2008). In contrast, for example, one survey shows that the Dutch have the world's best non-native English skills (see the EF EPI English Proficiency Index, https://www.ef.co.hu/epi/). Therefore, it is not surprising that the collaboration intensity of both Japan and South Korea with the United States is lower (the proportion of co-authored papers indexed in the WoS during 2014–2016 was 10.6 and 13.8 percent for these countries, respectively) than that of European high-income countries, such as Germany and the Netherlands (the proportion of co-authored papers indexed in the WoS during 2014–2016 was 15.9 and 18.0 percent for these countries, respectively). It has, however, been determined that the higher the intensity of the collaboration between a country and the United States is, the more likely it is that co-authored papers will receive a higher number of citations (Pan et al., 2012; Sud and Thelwall, 2016) and have a greater chance to become highly cited papers.
Figure 4. Geographical location of the bottom 100 least efficient cities in terms of countries' income levels.
Loughborough (England) is the only city in the bottom 100 least efficient cities that is in a high-income country belonging to the Anglosphere−Core. Beer-Sheva (Israel), a city in the group of the bottom 100 least efficient cities, is also in a high-income country, but is in the Anglosphere−Outer sphere. In fact, many of the bottom 100 cities are in countries in the Anglosphere−Outer sphere, but all of them are in lower middle-income countries, primarily in Southeast Asia (11 cities are in India, and one is in Pakistan).
East Asia is home to 46% of the bottom 100 least efficient cities. Besides Japan and South Korea, most of these cities are in China. While none of the East Asian countries included in the analysis belong to the group of the low-income or lower middle-income countries (Japan and South Korea are high-income countries, and China is an upper middle-income country), the publishing efficiency of the East Asian cities is rather low. Kawaguchi (Japan), the city producing the highest publishing efficiency in the region, is ranked only 138th. The facts above suggest that the economic development level of cities is a key factor influencing publishing efficiency, which is reinforced by the fact that almost all cities in the group of the top 100 cities are in high-income countries, but it is not the most important factor.
The examination of factors like the dominance of the English language and cities' economic development level will bring us closer to understanding why cities' publishing efficiency differs from each other; however, we need deeper insight to obtain a precise picture of publishing efficiency. For example, country-level data allows us to understand why the publishing efficiency of Canadian and Chinese cities significantly differ from each other but does not help us to understand why the publishing efficiency of Kawaguchi is higher or why that of Niigata is lower than the mean publishing efficiency of Japanese cities. To examine cities' publishing efficiency in a more precise way, we need to focus on some general as well as more city-specific factors, like the location of excellent organisations, cities' international collaboration patterns, and the productivity of specific research areas.
For example, in Kawaguchi, most publications were produced by the Japan Science and Technology Agency, one of Japan's excellent scientific organisations; therefore, the publishing efficiency of Kawaguchi is considerably higher than that of other Japanese cities. That is, which cities in the world are home to excellent organisations (e.g., universities and governmental and international research institutions) should be examined. The question is whether these organisations are exclusively located in cities producing high publishing efficiency or whether some of them might be found in cities with low publishing efficiency.
Location of excellent organisations as a factor influencing cities' publishing efficiency
In the paper by Van Noorden (2010: 907) an important question arose: What is the reason Boston ranks top in several analyses of scientific quality? A brief answer was given by José Lobo, a statistician and economist who was affiliated with Arizona State University at Tempe: 'Take three or four of the best universities in the world, put them in a city with a seaport, and voilà!' Naturally, the question requires a more complex answer (as was later also explained by Van Noorden), but it calls attention to a key factor: the scientific performance of cities significantly depends on whether they are home to top-ranked universities.
Although many research institutions, hospitals, governmental organisations (e.g., ministries and departments), NGOs, and companies have a significant publication output (see, for example, Archambault and Larivière 2011; Csomós and Tóth 2016; Hicks 1995), scientific publications are primarily produced by universities all over the world. In recent years, university rankings have gained in popularity. The main goal of ranking and comparing universities in terms of scientific output (of which the publication output is a vital component) is to make the most prestigious universities visible worldwide. There are several different world university rankings available (e.g., CWTS Leiden Ranking, The Times Higher Education World University Rankings, QS World University Rankings, and Academic Ranking of World Universities – ARWU), which are all based upon different input data. However, each ranking attributes more or less significance to bibliometric indicators, such as the number of publications produced in a given university, the quality (citation impact) of scientific publications, or the number of articles published in top journals (see, for example, Docampo et al. 2015; Frenken et al. 2017; Piro and Sivertsen 2016; Shehatta and Mahmood 2016). Naturally, the methodologies of how university rankings are produced differ from each other; thus, university rankings are different in terms of top university rankings (Abramo and D'Angelo 2015; Lin et al. 2013).
From the point of view of this analysis, university rankings contain indicative information only. I chose to use the Academic Ranking of World Universities (ARWU) published annually by the Shanghai Ranking Consultancy because the importance of the Shanghai ranking has become recognized by both governments and universities; further, according to Docampo and Cram (2014), the 'ranking has become a major resource for exploring the characteristics and quality of academic institutions and university systems worldwide.' I examined whether there is a relationship between the location of top-ranked universities and cities' publishing efficiency. Top-ranked universities correspond to universities having been ranked among the top 100 universities on one of the ARWU lists of 2014, 2015, and 2016.
In the period from 2014 to 2016, the top 100 universities were in 95 cities, some of which were home to more than one top-ranked university (e.g., New York, London, Boston, Pittsburgh, Munich, Stockholm, and Zurich). The publishing efficiency of cities that were home to the top 100 universities averages 2.641, while that of the rest of the cities averages 1.648. That is, the mean publishing efficiency of cities that are home to the top 100 universities is higher than that of the rest of the cities by 60%. These results suggest that the location of top-ranked universities significantly influences cities' publishing efficiency. In other words, it seems to be a logical assumption that top-ranked universities are primarily located in the most efficient cities. Thus, we should examine which of the top 100 most efficient cities are home to top-ranked universities.
Figure 5 shows that it is not an exclusive privilege of the most efficient cities to be home to top-ranked universities. Only 43% of the top 100 universities are in the top 100 most efficient cities. Furthermore, there are many cities worldwide (including Chinese and Japanese cities), that do not belong to the top 100 most efficient cities; yet, they are home to top-ranked universities. In the group of the bottom 100 cities, Moscow (Russia) is the only city that is home to a top-ranked university.
Figure 5. Geographical location of cities that are home to the top 100 universities as ranked by ARWU.
The location of top-ranked universities is considered an important but not decisive factor influencing cities' publishing efficiency. Examining the ranking of the most efficient cities, there are two cities (Villejuif, France and Menlo Park, California, USA) topping the ranking that are not home to top-ranked universities as ranked by the ARWU.
It should be noted that ARWU is just one of the alternatives to rank universities. Naturally, other organisations produce different rankings with different universities in top positions. For example, out of the top 10 universities, only Harvard University and Stanford University appear in both the CWTS Leiden Ranking of 2017 and the ARWU list of 2017. Contrary to this example, the groups of the top 10 universities in the QS World University Rankings of 2017 and the ARWU list differ from each other by only three universities. In addition, there are many top-ranked universities that are not included in the group of top 100 universities on the ARWU list but are in cities with high publishing efficiency. For example, Rotterdam, the forty-second most efficient city in the world, is home to the Erasmus University Rotterdam, which ranked 101-150 (i.e., outside but close to the top 100 universities).
The question arises as to what kind of organisations (but not universities) are in cities like Villejuif, Menlo Park, California, Upton, New York (United States), Greenbelt, Maryland (United States), Didcot (England), etc., which produce very high publishing efficiency. The explanations are as follows.
Villejuif, the city with the highest publishing efficiency in the world, is home to the 'Institut Gustave Roussy', one of the world's leading cancer-research institutions and the premier oncology centre and teaching hospital in Europe. Although Villejuif is a city (commune) having 50 thousand inhabitants, it is a suburb of Paris, about seven kilometres from its centre.
Menlo Park is home to the SLAC National Accelerator Laboratory, a linear accelerator that is owned by the US Department of Energy and operated by the Stanford University. Currently, SLAC is the world's largest linear accelerator and is one of top research centres for accelerator physics. The city of Menlo Park, with a population of 32 thousand, is in the San Francisco Bay Area between San Francisco and San Jose (i.e., in one of the fastest growing regions in the world that is home to many innovative companies and top-ranked universities). Additionally, Didcot has 25 thousand inhabitants and is 16 km south of Oxford. Didcot is home to the Rutherford Appleton Laboratory, a world-renowned research centre for particle physics and space science.
Cities such as Villejuif, Menlo Park, and Didcot can be characterised the same way; they are smaller cities, towns, or villages located in metropolitan areas and are home to quasi-independent research institutions (e.g., national laboratories) generally operating under the umbrella of prestigious universities. Naturally, top research institutions are in large cities as well, but being surrounded by universities, their visibility in terms of publication output is much lower, even if they produce very high publishing efficiency. For example, the total publication output of Geneva (Switzerland) is produced by many organisations, including the European Organisation for Nuclear Research (CERN), the World Health Organization (WHO), and the University of Geneva. In the period from 2014 to 2016, almost 60% of Geneva's total publication output came from the University of Geneva, which has been ranked among the top 100 universities on the ARWU list, and whose publishing efficiency is as high as 3.33. However, if we compare the publishing efficiency of the University of Geneva to that of CERN (5.37) and the WHO (6.86), it seems rather low. The same pattern appears in large cities like New York, London, Paris, Los Angeles, and Tokyo.
In conclusion, a positive relationship can be detected between the location of top-ranked universities and cities' high publishing efficiency. However, it should be noted that publications, primarily in large cities, come from different types of organisations, many of which have lower publishing efficiency than universities. Thus, some cities that are home to top-ranked universities have not been included in the top 100 most efficient cities. Furthermore, there are several top-ranked cities that are not home to top-ranked universities (or any universities); yet, they produce a very high publishing efficiency.
International collaboration pattern as a factor influencing cities' publishing efficiency
In recent years, the number of publications produced by single authors has been decreasing, while the number of co-authored publications and number of co-authors in publications have been increasing rapidly (Abramo et al. 2017; Castelvecchi 2015; Uddin et al. 2012). Therefore, cities' international collaboration patterns have become more complex (i.e., authors affiliated with a given city have been collaborating with a growing number of co-authors affiliated with other cities in other countries). Naturally, cities' international collaboration patterns are influenced by many factors, including differences between the productivity of scientific disciplines (Larivière et al. 2006; Paul-Hus et al. 2017; Zhou et al. 2009b), the size of the national research system (Van Raan 1998), and linguistic features (Csomós 2018; Maisonobe et al. 2016). These facts might suggest that international collaboration patterns vary city to city worldwide, making it impossible to predict cities' publishing efficiency. However, this question remains to be answered.
In this section, I aim to examine whether cities with high publishing efficiency and cities with low publishing efficiency are characterised by specific international collaboration patterns. Data obtained from the Web of Science database allows us to reveal countries with which the co-authors are affiliated. For example, in the period from 2014 to 2016, 27,322 articles were produced in Ann Arbor, Michigan (United States), from which 765 received enough citations to belong to the top 1% highly cited articles. If we focus on the international collaboration pattern of all articles produced in Ann Arbor between 2014 and 2016, 8.76% of the articles were written with co-authors affiliated with China, 7.23% had co-authors affiliated with Canada, 7.13% had co-authors affiliated with England, 6.78% had co-authors affiliated with Germany, 5.16% had co-authors affiliated with France, and so on. That is, in the case of all articles, the top collaborator with Ann Arbor is China, and the second ranked collaborator is Canada, and so on.
However, if we focus on the international collaboration pattern of the highly cited articles, a different picture emerges. Most highly cited articles were written with co-authors affiliated with England (27.32%), followed by Canada (25.49%), Germany (23.53%), France (20.26%), Italy (17.39%), and so on. That is, in the case of highly cited articles, the top collaborator of Ann Arbor is England (replacing China, the top collaborator in all articles), and the second-ranked collaborator is Canada.
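The collaboration shares quoted above can be reproduced with a short script once each article's set of co-author countries is known. The sketch below is only illustrative: the record format (a 'countries' set and a 'highly_cited' flag per article) and the toy data are assumptions standing in for the actual Web of Science export, not the format used in the study.

```python
from collections import Counter

def top_collaborators(articles, home_country, highly_cited_only=False, k=5):
    """Rank foreign countries by the share of a city's articles that have
    at least one co-author affiliated with that country.

    `articles` is a list of dicts with keys 'countries' (set of co-author
    countries) and 'highly_cited' (bool) -- an assumed toy format.
    """
    pool = [a for a in articles if a["highly_cited"]] if highly_cited_only else articles
    counts = Counter()
    for article in pool:
        for country in article["countries"] - {home_country}:
            counts[country] += 1
    return [(country, 100.0 * n / len(pool)) for country, n in counts.most_common(k)]

# Toy records standing in for the articles produced in one city.
articles = [
    {"countries": {"USA", "China"}, "highly_cited": False},
    {"countries": {"USA", "England", "Canada"}, "highly_cited": True},
    {"countries": {"USA", "Germany"}, "highly_cited": True},
    {"countries": {"USA"}, "highly_cited": False},
]
print(top_collaborators(articles, "USA"))                          # pattern for all articles
print(top_collaborators(articles, "USA", highly_cited_only=True))  # pattern for the top 1% subset
```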
I examine which countries are the top collaborators (i.e., collaborators ranked 1–5) in the case of all articles and in the case of highly cited articles produced in a given city in the period from 2014 to 2016. Furthermore, I compare the typical international collaboration patterns of the top 100 most efficient cities to that of the bottom 100 least efficient cities. My aim is to reveal whether there is a relationship between cities' international collaboration patterns and cities' publishing efficiencies and whether there is a difference between the typical collaboration patterns of the top 100 cities and the bottom 100 cities. When examining cities' international collaboration patterns, I implemented a geographical constraint. The group of the top 100 cities was divided into two sub-groups (i.e., the most efficient non-US cities and the most efficient US cities), and they were examined separately.
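The "frequency of occurrence in the top 1–5 positions" reported below can then be tabulated over a group of cities. A minimal sketch, assuming each city's ranked list of top collaborator countries (as computed above) is already available; the two sample cities and their lists are made up for illustration only.

```python
from collections import Counter

def occurrence_in_top_positions(city_rankings):
    """For each country, the percentage of cities in which it appears among the
    top 1-5 collaborators, and the percentage in which it is the top collaborator."""
    n_cities = len(city_rankings)
    in_top5, in_top1 = Counter(), Counter()
    for ranking in city_rankings.values():   # countries ordered by rank, rank 1 first
        for country in ranking[:5]:
            in_top5[country] += 1
        if ranking:
            in_top1[ranking[0]] += 1
    return {c: (100.0 * in_top5[c] / n_cities, 100.0 * in_top1[c] / n_cities)
            for c in in_top5}

# Made-up rankings for two cities, rank 1 first.
city_rankings = {
    "City A": ["USA", "Germany", "France", "Italy", "Netherlands"],
    "City B": ["USA", "England", "Germany", "Canada", "France"],
}
for country, (top5_pct, top1_pct) in sorted(occurrence_in_top_positions(city_rankings).items()):
    print(f"{country}: top 1-5 in {top5_pct:.2f}% of cities, top position in {top1_pct:.2f}%")
```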
Table 4 shows the countries occupying the top 1–5 positions as collaborators in all articles and their frequency of occurrence in those positions. The top collaborator of the most efficient non-US cities (48 out of the top 100 cities) is the United States, whose frequency of occurrence in the top 1–5 positions is 100% (in the top position in 81.25% of the cases). This means that the United States has a very intense collaboration with every single city belonging to the group of the most efficient non-US cities. Germany is ranked second by collaborating with 87.50% of the most efficient non-US cities in one of the top 1–5 positions. As compared to that of the United States, the frequency of occurrence of Germany in the top position is only 8.33%. In the case of all articles, the top 1–5 collaborators of the most efficient non-US cities are the United States, Germany, England, France, and Italy. As top collaborators, other countries (like the Netherlands, Australia, Spain, etc.) are rather marginal, primarily appearing in the top 4–5 positions.
Table 4. Top collaborators* in the case of all articles.
Rank | Most efficient non-US cities: collaborator and frequency of occurrence in the top 1–5 positions (%) | Most efficient US cities: collaborator and frequency (%) | Least efficient cities: collaborator and frequency (%)
1 USA 100.00 Germany 98.11 USA 98.00
2 Germany 87.50 England 98.11 Germany 78.00
3 England 75.00 China 94.34 England 69.00
4 France 75.00 Canada 84.91 France 39.00
5 Italy 52.08 France 66.04 China 31.00
6 Netherlands 27.08 Australia 15.09 Australia 30.00
7 Australia 18.75 Italy 15.09 Japan 25.00
8 Spain 18.75 Japan 5.66 Canada 23.00
9 China 16.67 Netherlands 5.66 Italy 23.00
10 Scotland 8.33 South Korea 5.66 South Korea 18.00
* In this context, collaborators correspond to countries with which co-authors are affiliated.
In the case of all articles produced in the most efficient US cities (52 out of the top 100 cities), the most frequently occurring countries as collaborators in the top 1–5 positions are Germany, England, China, Canada, and France (Table 4). China, the top collaborator of the most efficient US cities, has surpassed England by almost 2%. The United States has had a traditionally close scientific relationship with Western European countries (especially the United Kingdom) and Canada (Adams 2013), but at the city level, China has recently been occupying a more significant position (Csomós 2018; Tian 2016). Naturally, the top international collaborator of most Chinese cities has long been the United States (He 2009; Wang et al. 2013; Zhang and Guo 1997). If we merge the most efficient non-US cities and the most efficient US cities into a single group, it turns out that a total of 21 countries occupy one of the top 1–5 collaborator positions.
Table 4 illustrates that the international collaboration patterns of the bottom 100 least efficient cities resemble a mixture of the international collaboration patterns of the most efficient non-US cities and US cities. The United States (in the top position in 85% of the cases), Germany, England, France, and China appear in the top 1–5 positions in most cases. However, two facts should be highlighted: 1) As for the international collaboration patterns of the bottom 100 cities, the frequency of occurrence of countries following the United States is much lower than in the case of the most efficient non-US cities. The mean frequency of occurrence of the top 1–5 collaborator countries in articles produced in the most efficient non-US cities is 77.92%. This value is 88.30% in articles produced in the most efficient US cities, but it reaches only 63% in the bottom 100 least efficient cities. 2) The least efficient cities collaborate with a greater number of countries (33) occupying one of the top 1–5 positions than the most efficient cities (21). Many of these countries (e.g., Saudi Arabia, Brazil, Iran, Russia, and South Korea) produce low publishing efficiency; thus, the collaboration has a negative effect on cities' publishing efficiency (i.e., these collaborations result in a smaller number of articles that receive enough citations to belong to the top 1% highly cited articles).
It is, however, more important to know which countries (more precisely, the co-authors affiliated with those countries) are the top collaborators of cities (more precisely, the authors affiliated with those cities) in highly cited articles. My hypothesis is that the countries occupying the top 1–5 collaborator positions in highly cited articles differ from those occupying the top positions in the total number of articles, and that cities' publishing efficiency is heavily influenced by which countries are the top collaborators in highly cited articles.
Table 5 shows that the collaboration pattern of the most efficient non-US cities in highly cited articles is almost the same as the pattern that emerged in the total number of articles; however, the relative weight of Germany, France, and England has increased. In the total number of articles, the mean frequency of occurrence of the top 1–5 collaborators was 77.92%, while in highly cited articles this value increased to 79.17%. In highly cited articles produced in the most efficient US cities, the frequency of occurrence of England is 100%, which means that England occupies one of the top 1–5 positions for every single city (and the top position in 57.69% of the cases). Germany has the same frequency of occurrence in highly cited articles as in the total number of articles, but the frequency of occurrence of Canada, and especially that of France, has increased significantly. China, the third most frequently occurring country in the total number of articles, has vanished from the group of top collaborators in highly cited articles. This means that, although the total number of articles in US cities shows intense collaboration with China, the collaboration results in only a small number of highly cited articles. In highly cited articles, the mean frequency of occurrence of the top 1–5 collaborators of the most efficient US cities is 81.92%, slightly lower than in the total number of articles.
Table 5. Top collaborators* in the case of the highly cited articles (columns as in Table 4).
1 USA 100.00 England 100.00 USA 98.00
2 Germany 89.58 Germany 98.08 Germany 72.00
3 France 79.17 France 88.46 England 70.00
4 England 77.08 Canada 86.54 France 43.00
5 Italy 50.00 Australia 36.54 Australia 34.00
6 Netherlands 22.92 Italy 34.62 China 29.00
7 Spain 18.75 China 21.15 Italy 29.00
8 Switzerland 16.67 Spain 9.62 Spain 26.00
9 Australia 14.58 Netherlands 9.62 Canada 25.00
10 Canada 12.50 Switzerland 7.69 Japan 13.00
Not surprisingly, in highly cited articles, the bottom 100 least efficient cities have a very intense collaboration with the United States. In 98 cities, the United States occupies one of the top 1–5 positions and is in the top position in 79% of the cases. The frequency of occurrence of countries following the United States is much lower than in the most efficient cities. The mean frequency of occurrence of the top 1–5 collaborators in highly cited papers produced in the least efficient cities is only 63.4%. In the top 1–5 positions, the bottom 100 least efficient cities collaborate with a total of 30 countries, while this value in the top 100 most efficient (non-US and US) cities is 16.
In the case of the highly cited articles, there are fundamental differences between the international collaboration patterns of the most efficient cities and the least efficient cities. Although both groups of cities have roughly the same top collaborators, the least efficient cities collaborate with a much greater number of countries than the most efficient cities. It seems that this difference significantly influences the publishing efficiency of cities.
In conclusion, if co-authors are primarily affiliated with the United States, Germany, England, France, Canada, and Italy, which are leading countries in science, articles have a greater chance of receiving enough citations to belong to the top 1% highly cited articles.
The scientific field profiles of cities as a factor influencing publishing efficiency
Besides the factors detailed above, cities' publishing efficiency is significantly influenced by the productivity of scientific disciplines. The most productive disciplines vary from city to city, and the productivity of different disciplines in terms of highly cited articles differs as well (Bornmann et al. 2011). Below, the most productive disciplines are revealed for each city, both in the case of all articles and in the case of highly cited articles.
For example, in the period from 2014 to 2016, authors from Ann Arbor, Michigan produced articles in 151 disciplines: 8.16% of the 27,322 articles were published in the discipline of physics, 7.41% in engineering, 6.43% in 'science, technology, and other topics' (as it is indicated in the WoS), 4.99% in chemistry, 4.91% in psychology, and so on. The greatest number of highly cited articles was produced in quite different disciplines; 15.11% of the 765 highly cited articles were written in 'science, technology, and other topics', 11.27% in general internal medicine, 9.35% in physics, 9.22% in oncology, 5.89% in astronomy and astrophysics, and so on.
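The discipline shares quoted for Ann Arbor follow the same counting logic as the collaboration shares. A brief illustrative sketch (the per-article 'disciplines' list is again an assumed toy format; since the Web of Science assigns an article to one or more research areas, the shares need not sum to 100%):

```python
from collections import Counter

def discipline_shares(articles, k=5):
    """Share of a city's articles assigned to each research area.
    `articles` is a list of dicts with a 'disciplines' list (assumed toy format)."""
    counts = Counter(d for a in articles for d in a["disciplines"])
    return [(d, 100.0 * n / len(articles)) for d, n in counts.most_common(k)]

sample = [
    {"disciplines": ["Physics"]},
    {"disciplines": ["Physics", "Science, Technology, and Other Topics"]},
    {"disciplines": ["Oncology"]},
    {"disciplines": ["Engineering", "Chemistry"]},
]
print(discipline_shares(sample))  # e.g. Physics appears in 50% of the sample articles
```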
To obtain a better understanding of why the publishing efficiency of the most efficient cities and that of the least efficient cities differ significantly, we need to reveal the characteristics of the most productive disciplines in those cities (the most productive scientific discipline is the one to which the largest number of articles belong in the Web of Science). Table 6 shows that, in the case of the top 100 cities, the most productive discipline occurring in the top 1–5 positions is 'science, technology, and other topics'. In the Web of Science, articles published in multidisciplinary journals (e.g., Nature, Science, Proceedings of the National Academy of Sciences of the United States of America, and PLoS ONE) are classified into the discipline of 'science, technology, and other topics'. It is well known that a very large proportion of articles published in high-impact multidisciplinary journals become highly cited. For example, 45.67% of all articles published between 2014 and 2016 in Nature and 40.44% of all articles published in the same period in Science received enough citations to belong to the top 1% highly cited articles.
Table 6. Most productive scientific disciplines in all articles.
Rank | Most efficient cities: discipline and frequency of occurrence in the top 1–5 positions (%) | Least efficient cities: discipline and frequency (%)
1 Science, Technology, and Other Topics 84.00 Chemistry 99.00
2 Physics 63.00 Engineering 85.00
3 Neurosciences and Neurology 47.00 Physics 84.00
4 Chemistry 45.00 Materials Science 80.00
5 Engineering 41.00 Science, Technology, and Other Topics 54.00
6 Astronomy and Astrophysics 38.00 Mathematics 15.00
7 Oncology 29.00 Environmental Sciences and Ecology 12.00
8 Environmental Sciences and Ecology 21.00 Oncology 12.00
9 Psychology 20.00 Pharmacology and Pharmacy 11.00
10 Materials Science 16.00 Agriculture 7.00
In general, articles published in the top 100 most efficient cities can be classified into two major scientific fields: natural sciences (e.g., physics, chemistry, and engineering) and health sciences (e.g., neurosciences and neurology, oncology, and psychology). Contrary to the top 100 cities, most articles produced in the bottom 100 least efficient cities belong to natural science disciplines, while the field of health sciences is almost absent. In the case of the least efficient cities, oncology is the most frequently occurring health science discipline, with a frequency of occurrence of only 12% (i.e., it occurs in the top 1–5 positions in only 12% of the least efficient cities). In contrast to health sciences, natural sciences (e.g., chemistry, engineering, physics, and materials science) show a very high frequency of occurrence (Table 6). Chemistry is in the top 1–5 positions in almost every bottom 100 city and occupies the top position in 54% of the cases. This means that, in more than half of the least efficient cities, chemistry is the most productive research area.
In the case of the highly cited articles published in the top 100 most efficient cities, the discipline of 'science, technology, and other topics' is even more dominant; it is in the top 1–5 positions in 91% of the cities but occupies the top position in only 20% of the cases. In 35% of the top 100 cities, general internal medicine occupies the top position, although it ranks only second based on the aggregate frequency of occurrence (Table 7). In highly cited articles produced in the most efficient cities, both the number and the frequency of occurrence of health science disciplines are greater than in all articles. When examining all articles produced in the most efficient cities, general internal medicine is in the top 1–5 positions in only 5% of the cases, but in highly cited articles this value increases to 69%. Furthermore, the frequencies of occurrence of oncology and of the discipline of the cardiovascular system and cardiology increased by more than 50%.
Table 7. Most productive scientific disciplines in highly cited articles (columns as in Table 6).
1 Science, Technology, and Other Topics 91 Science, Technology, and Other Topics 74
2 Physics 69 Chemistry 66
3 General Internal Medicine 69 Physics 56
4 Astronomy and Astrophysics 54 Engineering 54
5 Oncology 42 General Internal Medicine 37
6 Chemistry 28 Materials Science 37
7 Cardiovascular System and Cardiology 21 Astronomy and Astrophysics 25
8 Biochemistry and Molecular Biology 18 Environmental Sciences and Ecology 20
9 Environmental Sciences and Ecology 15 Oncology 20
10 Neurosciences and Neurology 14 Mathematics 17
In highly cited articles produced in the bottom 100 least efficient cities, most of the dominant disciplines are in natural sciences. In the least efficient cities, the discipline of 'science, technology, and other topics' occupies the top position, but its frequency of occurrence is less than in the most efficient cities.
When we examined the international collaboration patterns of all articles and of highly cited articles produced in the top 100 cities and in the bottom 100 cities, we found that the two groups differ in the frequency of occurrence of the top collaborators, while the countries with which they collaborate (i.e., the location of co-authors) were largely the same. As for the scientific disciplines, there are significant differences between the top 100 cities and the bottom 100 cities not only in the frequency of occurrence of the most productive disciplines but also in the disciplines themselves. In the most efficient cities, highly cited articles are produced in natural science and health science disciplines to almost the same degree, while, in the least efficient cities, health disciplines are rather marginal. Furthermore, the frequency of occurrence of the discipline of 'science, technology, and other topics' is much higher in articles produced in the most efficient cities than in articles produced in the least efficient cities. This suggests that, in the most efficient cities, a greater proportion of articles are published in high-impact multidisciplinary journals than in the least efficient cities.
In this paper, I examined whether there were general factors influencing cities' publishing efficiency (i.e., the ratio of highly cited articles to all articles produced in that city). I have found the following five fundamental factors: the dominance of the English language, cities' economic development level, the location of excellent organisations, cities' international collaboration patterns, and the productivity of scientific disciplines.
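Throughout the paper, publishing efficiency is the ratio of highly cited articles to all articles produced in a city; the values in the appendix tables appear to express this ratio as a percentage, as a quick check against the Ann Arbor figures quoted earlier (27,322 articles, 765 of them highly cited) suggests:

```python
def publishing_efficiency(highly_cited, total):
    """Highly cited articles as a percentage of all articles produced in a city."""
    return 100.0 * highly_cited / total

print(round(publishing_efficiency(765, 27322), 3))  # 2.8, matching Ann Arbor's 2.800 in Table 8
```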
The dominance of the English language seems to be one of the most (if not the most) significant factors influencing cities' publishing efficiency. About three-quarters of the most efficient cities are in countries in the Anglosphere–Core, and the rest are in Northern and Western European countries. Contrary to the most efficient cities, 99% of the least efficient cities are in countries outside the Anglosphere–Core.
The economic development level of cities (derived from country-level data) seems to be a less significant factor influencing publishing efficiency than the linguistic environment. Results show that 98% of the most efficient cities are in high-income countries. This might suggest a relationship between cities' high income level and high publishing efficiency, but it turned out that one-third of the least efficient cities are also located in high-income countries. The reason for this is that the countries that are home to cities with low efficiency but a high income level do not belong to the Anglosphere, reinforcing the fact that the dominance of the English language (i.e., the linguistic environment of cities) has a greater influence on cities' publishing efficiency than their economic development level.
It is a well-known fact that scientific publications are primarily produced by universities. We can assume that the most efficient cities should be home to the most prestigious universities in the world, while top-ranked universities are not expected to be in the least efficient cities. The results show that this hypothesis is basically correct, at least when we focus on the location of top-ranked universities in the least efficient cities. However, the picture is more complex in the case of the most efficient cities, because half of those cities are not home to top-ranked universities. Moreover, many top-ranked universities are in cities that are not among the most efficient cities. The reason for this is that there are many towns and small or mid-sized cities that are home to world-renowned national or international research institutions producing even higher publishing efficiency than top-ranked universities. These settlements all lie within metropolitan areas, while the research institutions they host operate under the umbrella of prestigious research universities.
In the case of the highly cited articles, an overlap can be detected between the international collaboration patterns of the most efficient cities and the least efficient cities. In both cases, the top collaborators are the United States (primarily in the top position), Germany, England, France, Canada, and Australia/Italy. If we merely focus on who the top collaborators of a city are, we cannot predict whether its publishing efficiency will be high. What influences cities' publishing efficiency much more significantly is the magnitude of the collaboration intensity between cities (more precisely, the authors affiliated with those cities) and the leading countries in science (more precisely, the co-authors located in those countries). The higher the collaboration intensity is, the more likely it is that a city will produce high publishing efficiency.
In the most efficient cities, highly cited articles are produced in disciplines of natural sciences and health sciences to the same degree. In the least efficient cities, almost all highly cited articles are produced in the field of natural sciences (primarily in chemistry), while hardly any articles are published in health sciences. In the case of both groups of cities, 'science, technology, and other topics' is the most frequently occurring discipline in highly cited articles; however, its frequency of occurrence in articles produced in the most efficient cities is much higher than in the least efficient cities.
Based on the above research results, we can conclude that a city's publishing efficiency will be high if it meets the following conditions:
It is in a country in the Anglosphere–Core;
It is in a high-income country;
It is home to top-ranked universities and/or world-renowned research institutions;
Researchers affiliated with that city most intensely collaborate with researchers affiliated with cities in the United States, Germany, England, France, Canada, and Australia/Italy; and
The most productive scientific disciplines of highly cited articles are 'science, technology, and other topics' (i.e., most articles are published in high-impact multidisciplinary journals), disciplines in health sciences (especially general internal medicine and oncology), and disciplines in natural sciences (especially physics, astronomy, and astrophysics).
Approximately 60% of the top 100 most efficient cities meet the above criteria, but if we expand the geographical dimension beyond the Anglosphere, 86% of the top 100 cities will meet the criteria.
Most of the bottom 100 least efficient cities are in countries outside the Anglosphere. If we do not consider the determinant significance of the linguistic factor, the patterns of the Japanese, South Korean, and European cities resemble the patterns of the most efficient cities. All of them are in high-income countries and have more or less similar international collaboration patterns as that of the most efficient cities. Moreover, most of the highly cited articles are produced in similar disciplines (although disciplines in natural sciences are overrepresented). Naturally, there are several top-ranked and prestigious universities and research institutions in Japanese and South Korean cities (especially in Tokyo, Kyoto, Nagoya, Osaka, and Seoul); yet, they produce low publishing efficiency.
The question is: What can the city administration do to increase the city's performance in science (e.g., to increase the city's publishing efficiency)? Naturally, cities have limited opportunities to compete for components of the science establishment. Universities, hospitals, and most governmental research institutions are generally tied to their original loci. However, cities can compete to attract innovation-oriented companies, high-tech firms, and the R&D facilities of multinational companies, for example by establishing science parks. The positive effect of this process on a city's performance in science can be observed in the example of Beijing (Andersson et al. 2014; Liefner et al. 2006; Zhou 2005). Furthermore, cities can compete to acquire cutting-edge international research facilities. For example, in 2009 the founding member states of the European Spallation Source (ESS), which will be driven by the most powerful linear proton accelerator in the world, decided to place the ESS in Lund, selected from a competition among three European cities. The ESS will attract thousands of researchers from all over the world to Lund.
Some of the further research directions based upon the results of the study are as follows: What kind of local factors influence cities' publishing efficiency? If publishing efficiency is an indicator of cities' performance in science, what can city administrations do to improve it? If cities have very different sizes and populations (even publication output) worldwide, what kind of territorial demarcation can be introduced to balance these differences?
Table 8. The top 100 cities with the highest publishing efficiency, 2014–2016 (rank, country, city, publishing efficiency).
1 France Villejuif 6.174
2 USA Menlo Park, CA 5.676
3 USA Princeton, NJ 4.978
4 USA Cambridge, MA 4.670
5 USA Stanford, CA 4.658
6 Saudi Arabia Jeddah 4.541
7 USA Santa Cruz, CA 4.430
8 USA Pasadena, CA 4.400
9 USA San Francisco, CA 3.993
10 USA Berkeley, CA 3.932
11 USA Upton, NY 3.920
12 USA Bethesda, MD 3.912
13 USA Seattle, WA 3.870
14 USA Rochester, MN 3.830
15 USA Santa Barbara, CA 3.778
16 USA Boston, MA 3.742
17 USA Greenbelt, MD 3.679
18 USA Rockville, MD 3.667
19 USA Richland, WA 3.618
20 Switzerland Geneva 3.566
21 USA New Haven, CT 3.565
22 UK Oxford 3.427
23 USA Durham, NC 3.400
24 USA Evanston, IL 3.388
25 UK Didcot 3.366
26 USA Boulder, CO 3.288
27 USA Dallas, TX 3.272
28 USA New York, NY 3.262
29 Italy Perugia 3.219
30 USA Riverside, CA 3.201
31 Germany Heidelberg 3.177
32 UK Cambridge 3.171
33 UK Brighton 3.162
34 USA Nashville, TN 3.122
35 France Créteil 3.111
36 Israel Rehovot 3.096
37 USA Portland, OR 3.091
38 USA Palo Alto, CA 3.080
39 Switzerland Basel 3.050
40 Italy Trieste 3.036
41 USA St. Louis, MO 3.029
42 Netherlands Rotterdam 3.028
43 Canada Vancouver, BC 3.024
44 UK Norwich 3.006
45 USA Aurora, CO 3.004
46 USA Atlanta, GA 2.982
47 UK Lancaster 2.976
48 Netherlands Nijmegen 2.963
49 USA San Antonio, TX 2.961
50 France Gif-sur-Yvette 2.955
51 USA Los Angeles, CA 2.948
52 USA Chapel Hill, NC 2.937
53 Canada Victoria, BC 2.924
54 UK Dundee 2.916
55 USA Philadelphia, PA 2.907
56 UK Leicester 2.906
57 UK Edinburgh 2.898
58 USA Research Triangle Park, NC 2.897
59 South Africa Cape Town 2.896
60 Netherlands Wageningen 2.886
61 Germany Garching bei München 2.877
62 USA Baltimore, MD 2.866
63 Switzerland Lausanne 2.866
64 Denmark Copenhagen 2.855
65 USA Rochester, NY 2.852
66 USA Houston, TX 2.846
67 Estonia Tartu 2.842
68 USA Providence, RI 2.840
69 USA Denver, CO 2.837
70 USA Birmingham, AL 2.826
71 South Africa Durban 2.818
72 France Clermont-Ferrand 2.802
73 USA Ann Arbor, MI 2.800
74 Italy Ferrara 2.797
75 USA Cleveland, OH 2.788
76 Canada Hamilton, ON 2.768
77 UK Southampton 2.758
78 UK Cardiff 2.738
79 UK Exeter 2.738
80 USA San Diego, CA 2.734
81 USA Hanover, NH 2.715
82 Germany Mainz 2.714
83 USA Gaithersburg, MD 2.691
84 USA Worcester, MA 2.687
85 Switzerland Villigen 2.686
86 UK Birmingham 2.685
87 Denmark Lyngby 2.684
88 Germany Bonn 2.678
89 Canada Toronto, ON 2.660
90 UK Newcastle 2.658
91 Switzerland Bern 2.657
92 USA Amherst, MA 2.652
93 USA Eugene, OR 2.650
94 Netherlands Amsterdam 2.643
95 USA Chicago, IL 2.638
96 Germany Essen 2.627
97 Belgium Brussels 2.614
98 Italy Pavia 2.611
99 USA Winston-Salem, NC 2.594
100 USA Tallahassee, FL 2.591
Table 9. The bottom 100 cities with the lowest publishing efficiency, 2014–2016 (rank, country, city, publishing efficiency).
1 Tunisia Sfax 0.132
2 Russia Yekaterinburg 0.161
3 South Korea Cheonan 0.260
4 Iran Shiraz 0.268
5 Romania Iași 0.273
6 India Kharagpur 0.283
7 China Mianyang 0.325
8 Poland Lublin 0.333
9 Brazil São Carlos 0.333
10 China Wenzhou 0.348
11 India Varanasi 0.399
12 China Shijiazhuang 0.416
13 South Korea Cheongju 0.424
14 Japan Gifu 0.436
15 Iran Tabriz 0.444
16 Chile Concepción 0.452
17 Brazil Curitiba 0.454
18 Japan Kumamoto 0.456
19 Malaysia Serdang 0.462
20 Tunisia Tunis 0.484
21 Egypt Giza 0.487
22 China Nantong 0.494
23 Israel Beer-Sheva 0.501
24 Japan Ibaraki 0.503
25 India Kanpur 0.513
26 China Baoding 0.516
27 Turkey Konya 0.535
28 South Korea Busan 0.537
29 Iran Tehran 0.550
30 China Shenyang 0.551
31 Egypt Alexandria 0.552
32 Japan Niigata 0.556
33 France Villeneuve-d'Ascq 0.562
34 Spain Alicante 0.563
35 South Korea Gwangju 0.563
36 South Korea Jeonju 0.566
37 Brazil Fortaleza 0.567
38 Poland Poznań 0.568
39 Brazil Viçosa 0.576
40 Turkey Izmir 0.587
41 India Lucknow 0.587
42 Portugal Aveiro 0.587
43 China Zhengzhou 0.588
44 China Guilin 0.594
45 China Yantai 0.595
46 South Korea Daejeon 0.596
47 Brazil Belo Horizonte 0.601
48 India Kolkata 0.606
49 China Ürümqi 0.613
50 India Chennai 0.617
51 Japan Shizuoka 0.626
52 China Nanning 0.632
53 India Hyderabad 0.632
54 Japan Saitama 0.634
55 Japan Kawasaki 0.649
56 Brazil Recife 0.654
57 Italy Messina 0.661
58 Egypt Cairo 0.670
59 Turkey Istanbul 0.673
60 China Changzhou 0.682
61 South Korea Yongin 0.682
62 China Kunming 0.689
63 Pakistan Lahore 0.690
64 Japan Sapporo 0.695
65 Argentina Córdoba 0.720
66 Japan Kanazawa 0.732
67 Poland Gdańsk 0.736
68 Poland Wrocław 0.736
69 China Qingdao 0.737
70 Ukraine Kiev 0.749
71 China Jinan 0.749
72 China Xinxiang 0.754
73 India New Delhi 0.755
74 Poland Łódź 0.756
75 China Ningbo 0.758
76 India Bangalore 0.783
77 South Korea Jinju 0.783
78 Turkey Ankara 0.791
79 Japan Chiba 0.796
80 Japan Sagamihara 0.798
81 South Africa Pretoria 0.801
82 Russia Novosibirsk 0.804
83 South Korea Goyang 0.804
84 South Korea Daegu 0.806
85 South Korea Seoul 0.807
86 China Nanchang 0.809
87 China Taiyuan 0.810
88 China Guiyang 0.813
89 India Roorkee 0.829
90 Russia Moscow 0.836
91 China Wuxi 0.840
92 Brazil Porto Alegre 0.844
93 Brazil Florianópolis 0.855
94 Russia Saint Petersburg 0.856
95 India Mumbai 0.865
96 Japan Sendai 0.865
97 UK Loughborough 0.868
98 China Xuzhou 0.869
99 Brazil Campinas 0.884
100 Poland Kraków 0.891
References
Abramo, G., & D'Angelo, C. A. (2015). Evaluating university research: Same performance indicator, different rankings. Journal of Informetrics, 9(3), 514–525.
Abramo, G., D'Angelo, A. C., & Murgia, G. (2017). The relationship among research productivity, research collaboration, and their determinants. Journal of Informetrics, 11(4), 1016–1030.
Adams, J. (2013). Collaborations: The fourth age of research. Nature, 497(7451), 557–560.
Andersson, D. E., Gunessee, S., Matthiessen, C. W., & Find, S. (2014). The geography of Chinese science. Environment and Planning A, 46(12), 2950–2971.
Archambault, É., & Larivière, V. (2011). Scientific publications and patenting by companies: A study of the whole population of Canadian firms over 25 years. Science and Public Policy, 38(4), 269–278.
Bennett, J. C. (2007). The Anglosphere Challenge: Why the English-Speaking Nations Will Lead the Way in the Twenty-First Century. Lanham, Maryland, USA: Rowman & Littlefield Publishers.
Björkman, B. (2011). Pragmatic strategies in English as an academic lingua franca: Ways of achieving communicative effectiveness? Journal of Pragmatics, 43(4), 950–964.
Bornmann, L., & Leydesdorff, L. (2011). Which cities produce more excellent papers than can be expected? A new mapping approach, using Google Maps, based on statistical significance testing. Journal of the American Society for Information Science and Technology, 62(10), 1954–1962.
Bornmann, L., & Leydesdorff, L. (2012). Which are the best performing regions in information science in terms of highly cited papers? Some improvements of our previous mapping approaches. Journal of Informetrics, 6(2), 336–345.
Bornmann, L., Leydesdorff, L., Walch-Solimena, C., & Ettl, C. (2011). Mapping excellence in the geography of science: An approach based on Scopus data. Journal of Informetrics, 5(4), 537–546.
Bornmann, L., & Waltman, L. (2011). The detection of "hot regions" in the geography of science: A visualization approach by using density maps. Journal of Informetrics, 5(4), 547–553.
Braun, T., Glänzel, W., & Schubert, A. (1989). Some data on the distribution of journal publication types in the science citation index database. Scientometrics, 15(5–6), 325–330.
Braun, T., & Dióspatonyi, I. (2005). World flash on basic research: The counting of core journal gatekeepers as science indicators really counts. The scientific scope of action and strength of nations. Scientometrics, 62(3), 297–319.
Braun, T., Zsindely, S., Dióspatonyi, I., & Zádor, E. (2007). Gatekeeping patterns in nano-titled journals. Scientometrics, 70(3), 651–667.
Butler, Y. G., & Iino, M. (2005). Current Japanese reforms in English language education: The 2003 "action plan". Language Policy, 4(1), 25–45.
Campbell, L. (2010). Language Isolates and Their History, or, What's Weird, Anyway? In Proceedings of the 36th Annual Meeting of the Berkeley Linguistics Society (pp. 16–31). Berkeley: Berkeley Linguistics Society.
Castelvecchi, D. (2015). Physics paper sets record with more than 5,000 authors. Nature News, 15/05/2015. doi:10.1038/nature.2015.17567
Csomós, G., & Tóth, G. (2016). Exploring the position of cities in global corporate research and development: A bibliometric analysis by two different geographical approaches. Journal of Informetrics, 10(2), 516–532.
Csomós, G. (2018). A spatial scientometric analysis of the publication output of cities worldwide. Journal of Informetrics, 12(2), 547–566.
de Almeida, E. C. E., & Guimarães, J. A. (2013). Brazil's growing production of scientific articles: How are we doing with review articles and other qualitative indicators? Scientometrics, 97(2), 287–315.
de Solla Price, D. (1978). Toward a model for science indicators. In Y. Elkana, J. Lederberg, R. K. Merton, A. Thackray, & H. Zuckerman (Eds.), Toward a Metric of Science: The Advent of Science Indicators (pp. 69–95). New York: John Wiley & Sons.
Docampo, D., & Cram, L. (2014). On the internal dynamics of the Shanghai ranking. Scientometrics, 98(2), 1347–1366.
Docampo, D., Egret, D., & Cram, L. (2015). The effect of university mergers on the Shanghai ranking. Scientometrics, 104(1), 175–191.
Frenken, K., Heimeriks, G. J., & Hoekman, J. (2017). What drives university research performance? An analysis using the CWTS Leiden Ranking data. Journal of Informetrics, 11(3), 859–872.
Grossetti, M., Eckert, D., Gingras, Y., Jégou, L., Larivière, V., & Milard, B. (2014). Cities and the geographical deconcentration of scientific activity: A multilevel analysis of publications (1987–2007). Urban Studies, 51(10), 2219–2234.
Gupta, B. M., Kshitij, A., & Verma, C. (2011). Mapping of Indian computer science research output, 1999–2008. Scientometrics, 86(2), 261–283.
He, T. (2009). International scientific collaboration of China with the G7 countries. Scientometrics, 80(3), 571–582.
Hicks, D. (1995). Published papers, tacit competencies and corporate management of the public/private character of knowledge. Industrial and Corporate Change, 4(2), 401–424.
Iwai, Y. (2008). The perceptions of Japanese students toward academic English reading: Implications for effective ESL reading strategies. Multicultural Education, 15(4), 45–50.
Kato, M., & Ando, A. (2017). National ties of international scientific collaboration and researcher mobility found in Nature and Science. Scientometrics, 110(2), 673–694.
Kealey, T. (1996). The Economic Laws of Scientific Research. New York: St. Martin's Press.
Kim, H., Yoon, J. W., & Crowcroft, J. (2012). Network analysis of temporal trends in scholarly research productivity. Journal of Informetrics, 6(1), 97–110.
King, D. A. (2004). The scientific impact of nations. What different countries get for their research spending. Nature, 430, 311–316.
Kumar, S., & Garg, K. C. (2005). Scientometrics of computer science research in India and China. Scientometrics, 64(2), 121–132.
Larivière, V., Gingras, Y., & Archambault, É. (2006). Canadian collaboration networks: A comparative analysis of the natural sciences, social sciences and the humanities. Scientometrics, 68(3), 519–533.
Lee, L. C., Lin, P. H., Chuang, Y. W., & Lee, Y. Y. (2011). Research output and economic productivity: A Granger causality test. Scientometrics, 89(2), 465–478.
Leta, J., Glänzel, W., & Thijs, B. (2006). Science in Brazil. Part 2: Sectoral and institutional research profiles. Scientometrics, 67(1), 87–105.
Leydesdorff, L., & Wagner, C. (2009). Is the United States losing ground in science? A global perspective on the world science system. Scientometrics, 78(1), 23–36.
Leydesdorff, L., Wagner, C. S., & Bornmann, L. (2014). The European Union, China, and the United States in the top-1% and top-10% layers of most-frequently cited publications: Competition and collaborations. Journal of Informetrics, 8(3), 606–617.
Li, J., Qiao, L., Li, W., & Jin, Y. (2014). Chinese-language articles are not biased in citations: Evidences from Chinese-English bilingual journals in Scopus and Web of Science. Journal of Informetrics, 8(4), 912–916.
Liefner, I., Hennemann, S., & Lu, X. (2006). Cooperation in the innovation process in developing countries: Empirical evidence from Zhongguancun, Beijing. Environment and Planning A, 38(1), 111–130.
Lin, C. S., Huang, M. H., & Chen, D. Z. (2013). The influences of counting methods on university rankings based on paper count and citation count. Journal of Informetrics, 7(3), 611–621.
López-Navarro, I., Moreno, A. I., Quintanilla, M. Á., & Rey-Rocha, J. (2015). Why do I publish research articles in English instead of my own language? Differences in Spanish researchers' motivations across scientific domains. Scientometrics, 103(3), 939–976.
Lu, K., & Wolfram, D. (2010). Geographic characteristics of the growth of informetrics literature 1987–2008. Journal of Informetrics, 4(4), 591–601.
Maisonobe, M., Eckert, D., Grossetti, M., Jégou, L., & Milard, B. (2016). The world network of scientific collaborations between cities: Domestic or international dynamics? Journal of Informetrics, 10(4), 1025–1036.
Maisonobe, M., Grossetti, M., Milard, B., Jégou, L., & Eckert, D. (2017). The global geography of scientific visibility: A deconcentration process (1999–2011). Scientometrics, 113(1), 479–493.
Matthiessen, C. W., & Schwarz, A. W. (1999). Scientific centres in Europe: An analysis of research strength and patterns of specialisation based on bibliometric indicators. Urban Studies, 36(3), 453–477.
Meo, S. A., Al Masri, A. A., Usmani, A. M., Memon, A. N., & Zaidi, S. Z. (2013). Impact of GDP, spending on R&D, number of universities and scientific journals on research publications among Asian countries. PLoS ONE, 8(6), e66449.
Miyairi, N., & Chang, H. W. (2012). Bibliometric characteristics of highly cited papers from Taiwan, 2000–2009. Scientometrics, 92(1), 197–205.
Moin, M., Mahmoudi, M., & Rezaei, N. (2005). Scientific output of Iran at the threshold of the 21st century. Scientometrics, 62(2), 239–248.
Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics, 106(1), 213–228.
Morrison, J. (2014). China becomes world's third-largest producer of research articles. Nature News, 06/02/2014. doi:10.1038/nature.2014.14684
Nature Index (2016). US tops global research performance. Nature Index, 20/04/2016. https://www.natureindex.com/news-blog/us-tops-global-research-performance
Paasi, A. (2005). Globalisation, academic capitalism, and the uneven geographies of international journal publishing spaces. Environment and Planning A, 37(5), 769–789.
Pan, K. R., Kaski, K., & Fortunato, S. (2012). World citation and collaboration networks: Uncovering the role of geography in science. Scientific Reports, 2, 902.
Paul-Hus, A., Mongeon, P., Sainte-Marie, M., & Larivière, V. (2017). The sum of it all: Revealing collaboration patterns by combining authorship and acknowledgements. Journal of Informetrics, 11(1), 80–87.
Piro, F. N., & Sivertsen, G. (2016). How can differences in international university rankings be explained? Scientometrics, 109(3), 2263–2278.
Shehatta, I., & Mahmood, K. (2016). Correlation among top 100 universities in the major six global rankings: Policy implications. Scientometrics, 109(2), 1231–1254.
Sud, P., & Thelwall, M. (2016). Not all international collaboration is beneficial: The Mendeley readership and citation impact of biochemical research collaboration. Journal of the Association for Information Science and Technology, 67(8), 1849–1857.
Tardy, C. (2004). The role of English in scientific communication: Lingua franca or Tyrannosaurus rex? Journal of English for Academic Purposes, 3(3), 247–269.
Tian, P. (2016). China's diaspora key to science collaborations. Nature Index, 23/06/2016. https://www.natureindex.com/news-blog/chinas-diaspora-key-to-science-collaborations
Uddin, S., Hossain, L., Abbasi, A., & Rasmussen, K. (2012). Trend and efficiency analysis of co-authorship network. Scientometrics, 90(2), 687–699.
Van Noorden, R. (2010). Cities: Building the best cities for science. Nature, 467(7318), 906–908.
Van Raan, A. F. J. (1998). The influence of international collaboration on the impact of research results: Some simple mathematical considerations concerning the role of self-citations. Scientometrics, 42(3), 423–428.
Van Weijen, D. (2012). The language of (future) scientific communication. Research Trends, 31, 11/2012. https://www.researchtrends.com/issue-31-november-2012/the-language-of-future-scientific-communication/
Vinkler, P. (2008). Correlation between the structure of scientific research, scientometric indicators and GDP in EU and non-EU countries. Scientometrics, 74(2), 237–254.
Vinkler, P. (2010). The Evaluation of Research by Scientometric Indicators. Oxford: Chandos Publishing.
Wang, X., Xu, S., Wang, Z., Peng, L., & Wang, C. (2013). International scientific collaboration of China: Collaborating countries, institutions and individuals. Scientometrics, 95(3), 885–894.
Xie, Y., Zhang, C., & Lai, Q. (2014). China's rise as a major contributor to science and technology. Proceedings of the National Academy of Sciences of the United States of America, 111(26), 9437–9442.
Zhang, H., & Guo, H. (1997). Scientific research collaboration in China. Scientometrics, 38(2), 309–319.
Zhou, P., Thijs, B., & Glänzel, W. (2009a). Regional analysis on Chinese scientific output. Scientometrics, 81(3), 839–857.
Zhou, P., Thijs, B., & Glänzel, W. (2009b). Is China also becoming a giant in social sciences? Scientometrics, 79(3), 593–621.
Zhou, Y. (2005). The making of an innovative region from a centrally planned economy: Institutional evolution in Zhongguancun Science Park in Beijing. Environment and Planning A, 37(6), 1113–1134.
Zou, Y., & Laubichler, M. D. (2017). Measuring the contributions of Chinese scholars to the research field of systems biology from 2005 to 2013. Scientometrics, 110(3), 1615–1631.
|
CommonCrawl
|
\begin{document}
\title{Subproduct systems of Hilbert spaces: dimension two}
\author{Boris Tsirelson}
\date{} \maketitle
\begin{abstract} A subproduct system of two-dimensional Hilbert spaces can generate an Arveson system of type $ I_1 $ only. All possible cases are classified up to isomorphism. This work is triggered by a question of Bhat: can a subproduct system of \dimensional{n} Hilbert spaces generate an Arveson system of type $ I\!I $ or $ I\!I\!I $? The question is still open for $ n>2 $. \end{abstract}
\setcounter{tocdepth}{1} \tableofcontents
\section[Discrete time]
{\raggedright Discrete time} \label{sec:1} Subproduct systems of Hilbert spaces and Hilbert modules are introduced recently by Shalit and Solel \cite{SS} and Bhat and Mukherjee \cite{BM}.
In this section, by a subproduct system I mean a discrete-time subproduct system of two-dimensional Hilbert spaces (over $ \mathbb C $), defined as follows.
\begin{definition}\label{subproduct_system} A \emph{subproduct system} consists of two-dimensional Hilbert spaces $ E_t $ for $ t = 1,2,\dots $ and isometric linear maps \[ \beta_{s,t} : E_{s+t} \to E_s \otimes E_t \] for $ s,t \in \{1,2,\dots\} $, satisfying the associativity condition: the diagram\footnote{
Of course, $ {1\hskip-2.5pt{\rm l}}_t : E_t \to E_t $ is the identity map.} \[ \xymatrix{
& E_{r+s+t} \ar[dl]_{\beta_{r+s,t}} \ar[dr]^{\beta_{r,s+t}} \\
E_{r+s} \otimes E_t \ar[dr]_{\beta_{r,s}\otimes{1\hskip-2.5pt{\rm l}}_t} & & E_r \otimes E_{s+t}
\ar[dl]^{{1\hskip-2.5pt{\rm l}}_r\otimes\beta_{s,t}} \\
& E_r \otimes E_s \otimes E_t } \] is commutative for all $ r,s,t \in \{1,2,\dots\} $. \end{definition}
\begin{proposition}\label{1.2} For every subproduct system there exist vectors $ x_t, y_t \in E_t $ for $ t = 1,2,\dots $ such that one and only one of the following five conditions is satisfied.
(1) There exists $ a \in [0,1) $ such that for all $ s,t \in \{1,2,\dots\} $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = a^t \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes y_t \, . \end{gather*}
(2) There exists $ a \in [0,1) $ such that for all $ s,t \in \{1,2,\dots\} $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = a^t \, , \\ \beta_{s,t} (x_{s+t}) = \begin{cases}
x_s \otimes x_t &\text{if $ s $ is even},\\
x_s \otimes y_t &\text{if $ s $ is odd}, \end{cases} \quad \beta_{s,t} (y_{s+t}) = \begin{cases}
y_s \otimes y_t &\text{if $ s $ is even},\\
y_s \otimes x_t &\text{if $ s $ is odd}. \end{cases} \end{gather*}
(3) There exists $ \lambda \in \mathbb C \setminus \{0\} $ such that for all $ s,t \in \{1,2,\dots\} $ \begin{gather*}
\| x_t \| = 1 \, , \quad \| y_t \|^2 = 1 + |\lambda|^2 + |\lambda|^4 + \dots +
|\lambda|^{2t-2} \, , \quad \ip{ x_t }{ y_t } = 0 \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes x_t + \lambda^s x_s \otimes y_t \, . \end{gather*}
(4) For all $ s,t \in \{1,2,\dots\} $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = 0 \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes x_t \, . \end{gather*}
(5) For all $ s,t \in \{1,2,\dots\} $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = 0 \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = x_s \otimes y_t \, . \end{gather*} \end{proposition}
\begin{proof} The five conditions evidently are pairwise inconsistent; we have to prove that at least one of them is satisfied.
Ignoring the metric, that is, treating the Hilbert spaces $ E_t $ as just linear spaces, and using the classification given in \cite[Sect.~7]{I} we get linearly independent vectors $ x_t, y_t \in E_t $ that satisfy the formulas for $ \beta_{s,t} (x_{s+t}) $ and $ \beta_{s,t} (y_{s+t}) $ but maybe violate the formulas for $ \| x_t \| $, $ \| y_t \| $ and $ \ip{ x_t }{ y_t } $.
We postpone Case 3 and consider Cases 1, 2, 4 and 5. The right-hand sides of the formulas for $ \beta_{s,t} (x_{s+t}) $ and $ \beta_{s,t} (y_{s+t}) $ are tensor products. Taking into account that generally the relation $ x = y \otimes z \ne 0 $ implies \[
\frac{ x }{ \| x \| } = \frac{ y }{ \| y \| } \otimes \frac{ z }{ \| z \| } \, , \] we normalize $ x_t, y_t $ (dividing each vector by its norm). Thus, \[
\| x_t \| = \| y_t \| = 1 \quad \text{for all } t \, . \]
Case 1. We have $ \ip{ x_{s+t} }{ y_{s+t} } = \ip{ \beta_{s,t} (x_{s+t}) }{ \beta_{s,t} (y_{s+t}) } = \ip{ x_s \otimes x_t }{ y_s \otimes y_t } = \ip{ x_s }{ y_s } \ip{ x_t }{ y_t } $, therefore \[ \ip{ x_t }{ y_t } = a^t \quad \text{for all } t \, , \]
where $ a = \ip{ x_1 }{ y_1 } $, $ |a| < 1 $. Replacing $ y_t $ with $ {\rm e}^{{\rm i} t\varphi} y_t $ for an appropriate $ \varphi $ we get $ a \in [0,1) $.
Case 2. Denoting $ \ip{ x_t }{ y_t } $ by $ c_t $ we have \[ c_{s+t} = \begin{cases}
c_s c_t &\text{if $ s $ is even},\\
c_s \overline c_t &\text{if $ s $ is odd}, \end{cases} \]
thus $ c_{2n} = |c_1|^{2n} $, $ c_{2n+1} = c_1 |c_1|^{2n} $. Replacing $ x_{2n+1} $ with $ {\rm e}^{{\rm i} \varphi} x_{2n+1} $ and $ y_{2n+1} $ with $ {\rm e}^{-{\rm i} \varphi} y_{2n+1} $ for an appropriate $ \varphi $ we get $ c_1 \in [0,1) $.
Case 4. We have $ \ip{ x_{s+t} }{ y_{s+t} } = \ip{ x_s }{ y_s } $, thus $ \ip{ x_t }{ y_t } = \ip{ x_1 }{ y_1 } $. Replacing $ y_t $ with \[
\frac{ y_t - \ip{ y_t }{ x_t } x_t }{ \| y_t - \ip{ y_t }{ x_t } x_t \| } =
\frac{ y_t - \ip{ y_1 }{ x_1 } x_t }{ \| y_t - \ip{ y_1 }{ x_1 } x_t \| } \] we get $ \ip{ x_t }{ y_t } = 0 $.
Case 5 is similar to Case 4.
The rest of the proof treats Case 3.
First, we may prove the condition $ \| y_t \|^2 = 1 + |\lambda|^2 + \dots +
|\lambda|^{2t-2} $ for $ t=1 $ only (that is, just $ \| y_1 \| = 1 $), since the other conditions imply \[
\| y_{s+t} \|^2 = \| \beta_{s,t} (y_{s+t}) \|^2 = \| y_s \otimes x_t \|^2 + \|
\lambda^s x_s \otimes y_t \|^2 = \| y_s \|^2 + |\lambda|^{2s} \| y_t \|^2 \, . \]
Second, the condition $ \| y_1 \| = 1 $ can be ensured by dividing all $ y_t $
by $ \| y_1 \| $. Thus, we need not bother about $ \| y_t \| $.
We still can normalize vectors $ x_t $ (but not $ y_t $), since $ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t $. Thus, \[
\| x_t \| = 1 \quad \text{for all } t \, . \] Denoting $ \ip{ x_t }{ y_t } $ by $ c_t $ we have \[ c_{s+t} = c_s + \lambda^s c_t \, . \]
Sub-case 3a: $ \lambda = 1 $. We have $ c_{s+t} = c_s + c_t $, thus $ c_t = t c_1 $ for all $ t $. Replacing $ y_t $ with $ y_t - t c_1 x_t $ we get $ \ip{ x_t }{ y_t } = 0 $.
Sub-case 3b: $ \lambda \ne 1 $. We may replace $ y_t $ with $ y_t + a ( \lambda^t - 1 ) x_t $ ($ a $ will be chosen later), since \begin{multline*} \beta_{s,t} ( y_{s+t} + a ( \lambda^{s+t} - 1 ) x_{s+t} ) = \\ = y_s \otimes x_t + \lambda^s x_s \otimes y_t + a ( \lambda^{s+t} - 1 ) x_s \otimes
x_t = \\ = ( y_s + a ( \lambda^s - 1 ) x_s ) \otimes x_t + \lambda^s x_s \otimes ( y_t + a ( \lambda^t - 1 ) x_t ) \, . \end{multline*} An appropriate choice of $ a $ gives $ c_1 = 0 $ and therefore $ c_t = 0 $ for all $ t $. \end{proof}
A system of vectors $ x_t, y_t $ cannot satisfy more than one of the five conditions of Prop.~\ref{1.2}, and moreover, the corresponding parameter ($ a $ or $ \lambda $, if any) is determined uniquely. We say that a system $ (x_t,y_t)_{t=1,2,\dots} $ is a \emph{basis} of type $ \mathcal E_1(a) $ if it satisfies Condition (1) of Prop.~\ref{1.2} with the given parameter $ a $. Bases of types $ \mathcal E_2(a) $, $ \mathcal E_3(\lambda) $, $ \mathcal E_4 $ and $ \mathcal E_5 $ are defined similarly. By Prop.~\ref{1.2}, every subproduct system has a basis. It can have many bases, but they all are of the same type, as will be shown in Lemma \ref{1.4}.
\begin{definition} An \emph{isomorphism} between two subproduct systems $ (E_t,\beta_{s,t}) $ and $ (F_t,\gamma_{s,t}) $ consists of unitary operators $ \theta_t : E_t \to F_t $ for $ t=1,2,\dots $ such that the diagram \[ \xymatrix{
E_{s+t} \ar[d]_{\theta_{s+t}} \ar[r]^{\beta_{s,t}} & E_s \otimes E_t
\ar[d]^{\theta_s \otimes \theta_t} \\
F_{s+t} \ar[r]^{\gamma_{s,t}} & F_s \otimes F_t } \] is commutative for all $ t \in \{1,2,\dots\} $. \end{definition}
Subproduct systems having a basis of type $ \mathcal E_1(a) $ for a given $ a $ evidently exist and evidently are mutually isomorphic. The same holds for the other types. Up to isomorphism we have subproduct systems \begin{xalignat*}{2} & \mathcal E_1 (a) & & \text{for } a \in [0,1) \, , \\ & \mathcal E_2 (a) & & \text{for } a \in [0,1) \, , \\ & \mathcal E_3(\lambda) & & \text{for } \lambda \in \mathbb C \setminus \{0\} \, , \\ & \mathcal E_4 \, , \\ & \mathcal E_5 \, . \end{xalignat*}
\begin{lemma}\label{1.4} The subproduct systems listed above are mutually non-isomorphic. \end{lemma}
\begin{proof} Ignoring the metric we get subproduct systems of linear spaces classified in \cite[Sect.~7]{I}: \[ \begin{array}{ccc} \text{\small subproduct system} & & \text{\small subproduct system of} \\ \text{\small of Hilbert spaces} & & \text{\small linear spaces, according to \cite{I}} \\ \\ \mathcal E_1(a) & & \mathcal E_1 \\ \mathcal E_2(a) & & \mathcal E_2 \\ \mathcal E_3(\lambda) & & \mathcal E_3(\lambda) \\ \mathcal E_4 & & \mathcal E_4 \\ \mathcal E_5 & & \mathcal E_5 \end{array} \] Isomorphism between subproduct systems of linear spaces is necessary for isomorphism between subproduct systems of Hilbert spaces. It remains to prove that $ \mathcal E_1(a) $ and $ \mathcal E_1(b) $ are isomorphic for $ a=b $ only, and the same for $ \mathcal E_2(a), \mathcal E_2(b) $.
Case $ \mathcal E_1(a) $. All product vectors in $ \beta_{1,1} (E_2) $ are of the form $ c x_1 \otimes x_1 $ or $ c y_1 \otimes y_1 $ ($ c \in \mathbb C $), see \cite[Lemma 3.2]{I}. Two one-dimensional subspaces of $ E_1 $ are thus singled out. The cosine of the angle between them is an invariant, equal to the parameter $ a $.
Case $ \mathcal E_2(a) $ is similar. \end{proof}
\begin{theorem} Every subproduct system is isomorphic to one and only one of the subproduct systems $ \mathcal E_1(a) $, $ \mathcal E_2(a) $, $ \mathcal E_3(\lambda) $, $ \mathcal E_4 $, $ \mathcal E_5 $. \end{theorem}
\begin{proof} Combine Prop.~\ref{1.2} and Lemma \ref{1.4}. \end{proof}
An automorphism of a subproduct system is its isomorphism to itself. Given a basis $ (x_t,y_t)_t $ of a subproduct system, each automorphism $ (\theta_t)_t $ transforms it into another basis $ (\theta_t x_t, \theta_t y_t)_t $, which leads to a bijective correspondence between bases and automorphisms (as long as the initial basis is kept fixed). All automorphisms are described below in terms of a given basis $ (x_t,y_t)_t $. All bases are thus described.
Every subproduct system admits automorphisms \begin{equation}\label{1.6} \theta_t = {\rm e}^{{\rm i} ct} \cdot {1\hskip-2.5pt{\rm l}}_t \end{equation} for $ c \in [0,2\pi) $, called trivial automorphisms. They commute with all automorphisms.
The systems $ \mathcal E_1(a) $ and $ \mathcal E_2(a) $ admit a nontrivial automorphism \begin{equation}\label{1.7} \theta_t x_t = y_t \, , \quad \theta_t y_t = x_t \, . \end{equation} The system $ \mathcal E_1(0) $ (that is, $ \mathcal E_1(a) $ for $ a = 0 $) admits nontrivial automorphisms \begin{equation}\label{1.8} \theta_t x_t = x_t \, , \quad \theta_t y_t = {\rm e}^{{\rm i} bt} y_t \end{equation} for $ b \in [0,2\pi) $. The system $ \mathcal E_2(0) $ admits nontrivial automorphisms \begin{equation}\label{1.9} \begin{gathered} \theta_{2n} x_{2n} = x_{2n} \, , \quad \theta_{2n} y_{2n} = y_{2n} \, , \\ \theta_{2n-1} x_{2n-1} = {\rm e}^{{\rm i} b} x_{2n-1} \, , \quad \theta_{2n-1} y_{2n-1} = {\rm e}^{-{\rm i} b} y_{2n-1} \end{gathered} \end{equation} for $ b \in [0,2\pi) $.
The systems $ \mathcal E_3(\lambda) $, $ \mathcal E_4 $ and $ \mathcal E_5 $ admit nontrivial automorphisms \begin{equation}\label{1.10} \theta_t x_t = x_t \, , \quad \theta_t y_t = {\rm e}^{{\rm i} b} y_t \end{equation} for $ b \in [0,2\pi) $.
\begin{lemma}\label{1.11} For the systems $ \mathcal E_1(a) $ and $ \mathcal E_2(a) $ with $ a \ne 0 $, every automorphism is the composition of a trivial automorphism and possibly the automorphism \eqref{1.7}.
For the system $ \mathcal E_1(0) $, every automorphism is the composition of a trivial automorphism, possibly the automorphism \eqref{1.7}, and possibly an automorphism of the form \eqref{1.8}.
For the system $ \mathcal E_2(0) $, every automorphism is the composition of a trivial automorphism, possibly the automorphism \eqref{1.7}, and possibly an automorphism of the form \eqref{1.9}.
For the systems $ \mathcal E_3(\lambda) $, $ \mathcal E_4 $ and $ \mathcal E_5 $, every automorphism is the composition of a trivial automorphism and possibly an automorphism of the form \eqref{1.10}. \end{lemma}
\begin{proof} By the definition of automorphism, $ \beta_{s,t} ( \theta_{s+t} u ) = ( \theta_s \otimes \theta_t ) ( \beta_{s,t}(u) ) $ for all $ u \in E_{s+t} $.
For $ \mathcal E_4 $ we have $ \beta_{s,t} (E_{s+t}) = E_s \otimes x_t $, therefore $ \theta_t x_t = {\rm e}^{{\rm i} c_t} x_t $ for some $ c_t $. The relation $ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t $ implies $ {\rm e}^{{\rm i} c_{s+t}} = {\rm e}^{{\rm i} c_s} {\rm e}^{{\rm i} c_t} $, since \begin{multline*} {\rm e}^{{\rm i} c_{s+t}} x_s \otimes x_t = \beta_{s,t} \( {\rm e}^{{\rm i} c_{s+t}} x_{s+t} \) = \\ = \beta_{s,t} \( \theta_{s+t} x_{s+t} \) = ( \theta_s \otimes \theta_t ) (
\beta_{s,t}(x_{s+t}) ) = \\ = ( \theta_s \otimes \theta_t ) ( x_s \otimes x_t ) = ( \theta_s x_s ) \otimes ( \theta_t x_t ) = {\rm e}^{{\rm i} c_s} x_s \otimes {\rm e}^{{\rm i} c_t} x_t \, ; \end{multline*} thus, $ \theta_t x_t = {\rm e}^{{\rm i} ct} x_t $. On the other hand, $ \theta_t y_t = {\rm e}^{{\rm i} \alpha_t } y_t $ for some $ \alpha_t $, since $ \theta_t y_t $ is a unit vector orthogonal to $ \theta_t x_t $. The relation $ \beta_{s,t} (y_{s+t}) = y_s \otimes x_t $ implies $ {\rm e}^{{\rm i} \alpha_{s+t}} = {\rm e}^{{\rm i}\alpha_s} {\rm e}^{{\rm i} ct} $; thus $ \theta_t y_t = {\rm e}^{{\rm i} b} {\rm e}^{{\rm i} ct} y_t $ where $ b = \alpha_1 - c $.
For $ \mathcal E_5 $ the proof is similar.
For $ \mathcal E_3(\lambda) $ we note that all product vectors in $ \beta_{s,t} (E_{s+t}) $ are collinear to $ x_s \otimes x_t $ (see \cite[Lemma 3.3]{I}), therefore $ \theta_t x_t = {\rm e}^{{\rm i} c_t} x_t $ for some $ c_t $. As before, the relation $ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t $ implies $ {\rm e}^{{\rm i} c_{s+t}} = {\rm e}^{{\rm i} c_s} {\rm e}^{{\rm i} c_t} $, thus $ \theta_t x_t = {\rm e}^{{\rm i} ct} x_t $. On the other hand, $ \theta_t y_t = {\rm e}^{{\rm i} \alpha_t } y_t $ for some $ \alpha_t $, since $ \theta_t y_t $ is a vector orthogonal to $ \theta_t x_t $. As before, the relation $ \beta_{s,t} (y_{s+t}) = y_s \otimes x_t + \lambda^s x_s \otimes y_t $ implies \[ {\rm e}^{{\rm i} \alpha_{s+t}} ( y_s \otimes x_t + \lambda^s x_s \otimes y_t ) = {\rm e}^{{\rm i} \alpha_s} y_s \otimes {\rm e}^{{\rm i} ct} x_t + \lambda^s {\rm e}^{{\rm i} cs} x_s \otimes {\rm e}^{{\rm i} \alpha_t} y_t \, , \] therefore $ {\rm e}^{{\rm i} \alpha_{s+t}} = {\rm e}^{{\rm i} \alpha_s} {\rm e}^{{\rm i} ct} $ and $ \theta_t y_t = {\rm e}^{{\rm i} b} {\rm e}^{{\rm i} ct} y_t $.
For $ \mathcal E_1(a) $ we recall the argument used in the proof of Lemma \ref{1.4}: two one-dimensional subspaces are singled out; $ \theta_t x_t $ must be either $ {\rm e}^{{\rm i} c_t} x_t $ or $ {\rm e}^{{\rm i} c_t} y_t $, and the same holds for $ \theta_t y_t $. The relation $ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t $ implies $ {\rm e}^{{\rm i} c_{s+t}} = {\rm e}^{{\rm i} c_s} {\rm e}^{{\rm i} c_t} $ as before, but it also shows that the choice of $ x $ or $ y $ must conform at $ s $, $ t $ and $ s+t $, which means that either $ \theta_t x_t = {\rm e}^{{\rm i} ct} x_t $ for all $ t $, or $ \theta_t x_t = {\rm e}^{{\rm i} ct} y_t $ for all $ t $. Accordingly, in the first case $ \theta_t y_t = {\rm e}^{{\rm i} dt} y_t $ for all $ t $, while in the second case $ \theta_t y_t = {\rm e}^{{\rm i} dt} x_t $ for all $ t $.
Assume for now that $ a \ne 0 $. The relation $ \ip{ x_t }{ y_t } = a^t $ gives $ {\rm e}^{{\rm i} ct} {\rm e}^{-{\rm i} dt} = 1 $ and so, $ d = c $. The first case leads to the trivial automorphism \[ \theta_t (x_t) = {\rm e}^{{\rm i} ct} x_t \, , \quad \theta_t (y_t) = {\rm e}^{{\rm i} ct} y_t \, . \] The second case leads to \[ \theta_t (x_t) = {\rm e}^{{\rm i} ct} y_t \, , \quad \theta_t (y_t) = {\rm e}^{{\rm i} ct} x_t \, , \] the composition of a trivial automorphism and \eqref{1.7}.
Assume now that $ a = 0 $. The first case leads to \[ \theta_t (x_t) = {\rm e}^{{\rm i} ct} x_t \, , \quad \theta_t (y_t) = {\rm e}^{{\rm i} dt} y_t \, , \] the composition of a trivial automorphism and \eqref{1.8}. The second case leads to \[ \theta_t (x_t) = {\rm e}^{{\rm i} ct} y_t \, , \quad \theta_t (y_t) = {\rm e}^{{\rm i} dt} x_t \, , \] the composition of a trivial automorphism, \eqref{1.7} and \eqref{1.8}.
For $ \mathcal E_2(a) $: this case is left to the reader. (It will be excluded in Sect.~\ref{sec:3}, anyway.) \end{proof}
\section[Time on a sublattice]
{\raggedright Time on a sublattice} \label{sec:2} Let $ (E_t,\beta_{s,t})_{s,t} $ be a subproduct system, and $ m \in \{1,2,\dots\} $. Restricting ourselves to $ E_m, E_{2m}, \dots $ we get another subproduct system $ (E_{mt},\beta_{ms,mt})_{s,t} $. The type of the new system is uniquely determined by the type of the original system: \begin{equation}\label{ooo} \begin{array}{ccc} \text{type of } (E_t)_t & & \text{type of } (E_{mt})_t \\ \\ \mathcal E_1(a) & & \mathcal E_1 (a^m) \\ \mathcal E_2(a) & & \begin{cases}
\mathcal E_1 (a^m)& \text{if $ m $ is even},\\
\mathcal E_2 (a^m)& \text{if $ m $ is odd} \end{cases} \\ \mathcal E_3(\lambda) & & \mathcal E_3(\lambda^m) \\ \mathcal E_4 & & \mathcal E_4 \\ \mathcal E_5 & & \mathcal E_5 \end{array} \end{equation} Here and later on I abbreviate $ (E_t,\beta_{s,t})_{s,t} $ to $ (E_t)_t $. The proof of \eqref{ooo} is straightforward: every basis $ b = (x_t,y_t)_t $ of $ (E_t)_t $ leads naturally to a basis $ R_m (b) $ of $ (E_{mt})_t $, namely, \begin{equation}\label{2.2} R_m : (x_t,y_t)_t \mapsto (x_{mt},y_{mt})_t \end{equation} when $ (E_t)_t $ is of type $ \mathcal E_1(a) $, $ \mathcal E_2(a) $, $ \mathcal E_4 $ or $ \mathcal E_5 $, and \begin{equation}\label{2.3}
R_m : (x_t,y_t)_t \mapsto \Big( x_{mt}, \frac1{\|y_m\|} y_{mt} \Big)_t \end{equation} when $ (E_t)_t $ is of type $ \mathcal E_3(\lambda) $.
Every automorphism $ \Theta = (\theta_t)_t $ of $ (E_t)_t $ leads naturally to an automorphism $ S_m(\Theta) = (\theta_{mt})_t $ of $ (E_{mt})_t $. Clearly, \[ R_m \( \Theta (b) \) = (S_m(\Theta)) \( R_m(b) \) \, , \] where automorphisms act naturally on bases: \[ \Theta (b) = ( \theta_t x_t, \theta_t y_t )_t \quad \text{for } \Theta = (\theta_t)_t \text{ and } b = (x_t,y_t)_t \, . \] Note also that \begin{equation}\label{2.4} R_{mn} (b) = R_m ( R_n(b) ) \, , \end{equation} and $ S_{mn} (\Theta) = S_m ( S_n(\Theta) ) $ for all $ m,n $, $ b $ and $ \Theta $.
The map $ S_m $ is a homomorphism from the group of automorphisms of $ (E_t)_t $ to the group of automorphisms of $ (E_{mt})_t $.
\begin{lemma} For every $ m $ the homomorphism $ S_m $ is an epimorphism. That is, every automorphism of $ (E_{mt})_t $ is of the form $ S_m(\Theta) $. \end{lemma}
\begin{proof} By Lemma \ref{1.11}, every automorphism of $ (E_{mt})_t $ is the product of automorphisms written out explicitly in \eqref{1.6}--\eqref{1.10}. It is sufficient to check that each factor is of the form $ S_m(\Theta) $. The check, left to the reader, is straightforward. (The case $ \mathcal E_2(a) $ will be excluded in Sect.~\ref{sec:3}, anyway.) \end{proof}
\begin{corollary}\label{2.6} For every $ m $ the map $ R_m $ is surjective. That is, every basis of $ (E_{mt})_t $ is of the form $ R_m(b) $ where $ b $ is a basis of $ (E_t)_t $. \end{corollary}
\begin{proof} Let $ b_0 $ be a basis of $ (E_t)_t $ and $ \Theta $ run over all automorphisms of $ (E_t)_t $, then $ S_m(\Theta) $ runs over all automorphisms of $ (E_{mt})_t $, therefore $ R_m ( \Theta(b_0) ) = (S_m(\Theta)) ( R_m(b_0) ) $ runs over all bases of $ (E_{mt})_t $. \end{proof}
\section[Rational time]
{\raggedright Rational time} \label{sec:3} In this section the object introduced by Def.~\ref{subproduct_system} will be called a \emph{discrete-time subproduct system.} A \emph{rational-time subproduct system} is defined similarly; in this case the variables $ r,s,t $ of Def.~\ref{subproduct_system} run over the set $ \mathbb Q_+ = \mathbb Q \cap (0,\infty) $ of all positive rational numbers (rather than the set $ \mathbb Z_+ = \mathbb Z \cap (0,\infty) $ of all positive integers).
\begin{theorem}\label{3.1} For every rational-time subproduct system there exist vectors $ x_t, y_t \in E_t $ for $ t \in \mathbb Q_+ $ such that one and only one of the following four conditions (numbered 1, 3, 4, 5) is satisfied.
(1) There exists $ a \in [0,1) $ such that for all $ s,t \in \mathbb Q_+ $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = a^t \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes y_t \, . \end{gather*}
(3) There exist $ c \in (0,\infty) $ and a family $ (\eta_t)_{t\in\mathbb Q_+} $ of numbers $ \eta_t \in \mathbb C $ such that for all $ s,t \in \mathbb Q_+ $ we have $
|\eta_t| = 1 $, $ \eta_{s+t} = \eta_s \eta_t $ and \begin{gather*}
\| x_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = 0 \, , \\
\| y_t \|^2 = \begin{cases}
t &\text{if } c = 1,\\
(c^{2t}-1) / (c^2-1) &\text{if } c \ne 1;
\end{cases} \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes x_t + c^s \eta_s x_s \otimes y_t \, . \end{gather*}
(4) For all $ s,t \in \mathbb Q_+ $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = 0 \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes x_t \, . \end{gather*}
(5) For all $ s,t \in \mathbb Q_+ $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = 0 \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = x_s \otimes y_t \, . \end{gather*} \end{theorem}
\begin{proof} For each $ n = 1,2,\dots $ we consider the restriction of $ (E_t)_{t\in\mathbb Q_+} $ to $ t \in \{ 1/n!, 2/n!, \dots \} $, the discrete-time subproduct system $ (E_{t/n!})_{t\in\mathbb Z_+} $. The systems $ (E_{t/(n+1)!})_{t\in\mathbb Z_+} $ and $ (E_{t/n!})_{t\in\mathbb Z_+} $ are related as described in Sect.~\ref{sec:2} (with $ m = n+1 $). The first of such systems, $ (E_t)_{t\in\mathbb Z_+} $, can be of type $ \mathcal E_1(a) $, $ \mathcal E_3(\lambda) $, $ \mathcal E_4 $ or $ \mathcal E_5 $ but not $ \mathcal E_2(a) $ (since there is no corresponding type of $ (E_{t/2})_{t\in\mathbb Z_+} $). We postpone the case $ \mathcal E_3(\lambda) $ and assume for now that $ (E_t)_{t\in\mathbb Z_+} $ is of type $ \mathcal E_1(a) $, $ \mathcal E_4 $ or $ \mathcal E_5 $. Accordingly, the $n$-th of these systems, $ (E_{t/n!})_{t\in\mathbb Z_+} $, is of type $ \mathcal E_1(a^{1/n!}) $, $ \mathcal E_4 $ or $ \mathcal E_5 $. Using Corollary \ref{2.6} we choose for each $ n=1,2,\dots $ a basis $ b_{1/n!} $ of $ (E_{t/n!})_{t\in\mathbb Z_+} $ such that $ R_{n+1} (b_{1/(n+1)!}) = b_{1/n!} $ for each $ n $.
Every $ r \in \mathbb Q_+ $ is of the form $ {k_n}/n! $ for $ n $ large enough. The basis $ b_r = R_{k_n} (b_{1/n!}) $ of $ (E_{rt})_{t\in\mathbb Z_+} $ does not depend on $ n $ due to \eqref{2.4}. For the same reason, \[ b_{mr} = R_m (b_r) \quad \text{for all } r \in \mathbb Q_+ \text{ and } m=1,2,\dots \] We introduce $ x^{(r)}_{rt}, y^{(r)}_{rt} \in E_{rt} $ for $ t \in \mathbb Z_+ $ by \[ b_r = \( x^{(r)}_{rt}, y^{(r)}_{rt} \)_{t\in\mathbb Z_+} \, . \]
Case $ \mathcal E_4 $: by \eqref{2.2}, \[ \( x^{(mr)}_{mrt}, y^{(mr)}_{mrt} \)_{t\in\mathbb Z_+} = b_{mr} = R_m (b_r) = \( x^{(r)}_{mrt}, y^{(r)}_{mrt} \)_{t\in\mathbb Z_+} \, , \]
that is, $ x^{(mr)}_{mrt} = x^{(r)}_{mrt} = x_{mrt} $ depends only on $ mrt $; vectors $ x_t \in E_t $ for $ t \in \mathbb Q_+ $ are thus defined. The same holds for $ y_t $. Clearly, $ \| x_t \| = \| y_t \| = 1 $ and $ \ip{ x_t }{ y_t } = 0 $. The relation $ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t $ for $ s,t \in \mathbb Q_+ $ follows from the relation \[ \beta_{k/n!,l/n!} \( x^{(1/n!)}_{(k+l)/n!} \) = x^{(1/n!)}_{k/n!} \otimes x^{(1/n!)}_{l/n!} \] for $ k,l \in \mathbb Z_+ $ and $ n $ large enough. The same holds for $ y $.
Case $ \mathcal E_5 $ is similar.
Case $ \mathcal E_1(a) $: as before, \eqref{2.2} leads to $ x_t,y_t \in E_t $ for $ t
\in \mathbb Q_+ $; also $ \| x_t \| = \| y_t \| = 1 $, and $ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t $, $ \beta_{s,t} (y_{s+t}) = y_s \otimes y_t $. The relation $ \ip{ x_t }{ y_t } = a^t $ follows from the relation \[ \ip{ x_{k/n!}^{(1/n!)} }{ y_{k/n!}^{(1/n!)} } = \( a^{1/n!} \)^k \] for the system $ (E_{t/n!})_{t\in\mathbb Z_+} $ of type $ \mathcal E_1(a^{1/n!}) $.
Case $ \mathcal E_3(\lambda) $: the system $ (E_{t/n!})_{t\in\mathbb Z_+} $ is of type $ \mathcal E_3(\lambda_{1/n!}) $ for some $ \lambda_{1/n!} \in \mathbb C \setminus \{0\} $ satisfying $ \lambda_{1/n!}^{n!} = \lambda $ and $ \lambda_{1/(n+1)!}^{n+1} = \lambda_{1/n!} $. We define $ \lambda_r $ for all $ r \in \mathbb Q_+ $ by $ \lambda_{k/n!} = \lambda_{1/n!}^k $ and get $
\lambda_{r+s} = \lambda_r \lambda_s $ and $ |\lambda_s| = |\lambda|^s $. By \eqref{2.3}, \[ \( x^{(mr)}_{mrt}, y^{(mr)}_{mrt} \)_{t\in\mathbb Z_+} = b_{mr} = R_m (b_r) =
\Big( x^{(r)}_{mrt}, \frac1{ \| y^{(r)}_{mr} \| } y^{(r)}_{mrt} \Big)_{t\in\mathbb Z_+} \, ; \] taking $ r = \frac1{(n+1)!} $ and $ m = n+1 $ we get for $ t \in \mathbb Z_+ $ \[
x^{(1/n!)}_{t/n!} = x^{(1/(n+1)!)}_{t/n!} \, , \quad y^{(1/n!)}_{t/n!} = \frac1{ \| y^{(1/(n+1)!)}_{1/n!} \| } y^{(1/(n+1)!)}_{t/n!} \, . \] We define $ x_r $ for all $ r \in \mathbb Q_+ $ by $ x_{t/n!} = x^{(1/n!)}_{t/n!} $ and get $ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t $ as before; however, $ y $ needs more effort. We have \[
\| y_{1/n!}^{(1/(n+1)!)} \|^2 = \| y_{(n+1)/(n+1)!}^{(1/(n+1)!)} \|^2 = 1 + c^{2/(n+1)!} + \dots + c^{2n/(n+1)!} \, , \]
where $ c = |\lambda| $.
Sub-case $ c = 1 $: we note that $ \| y_{1/n!}^{(1/(n+1)!)} \|^2 = n+1 $ implies \[ \frac1{ \sqrt{n!} } y^{(1/n!)}_{t/n!} = \frac1{ \sqrt{(n+1)!} } y^{(1/(n+1)!)}_{(n+1)t/(n+1)!} \, , \] define $ y_r $ for all $ r \in \mathbb Q_+ $ by \[ y_{t/n!} = \frac1{ \sqrt{n!} } y^{(1/n!)}_{t/n!} \] and get for $ t \in \mathbb Z_+ $ \[
\| y_{t/n!} \|^2 = \frac1{n!} \| y^{(1/n!)}_{t/n!} \|^2 = \frac{t}{n!} \, , \]
that is, $ \| y_t \|^2 = t $ for all $ t \in \mathbb Q_+ $. The relation \[ \beta_{k/n!,l/n!} \( y_{(k+l)/n!}^{(1/n!)} \) = y_{k/n!}^{(1/n!)} \otimes x_{l/n!}^{(1/n!)} + \lambda_{1/n!}^k x_{k/n!}^{(1/n!)} \otimes y_{l/n!}^{(1/n!)} \] implies $ \beta_{s,t} (y_{s+t}) = y_s \otimes x_t + \lambda_s x_s \otimes y_t $. It remains to define $ \eta_s $ by \[
\lambda_s = |\lambda_s| \eta_s = c^s \eta_s \, . \]
Sub-case $ c \ne 1 $: only the calculations related to $ \| y_t \| $ must be reconsidered. We note that \[
\| y_{1/n!}^{(1/(n+1)!)} \|^2 = \frac{ c^{2/n!} - 1 }{ c^{2/(n+1)!} - 1 } \] implies \[ \sqrt{ c^{2/n!}-1 } y^{(1/n!)}_{t/n!} = \sqrt{ c^{2/(n+1)!}-1 } y^{(1/(n+1)!)}_{(n+1)t/(n+1)!} \, , \] define $ y_r $ for all $ r \in \mathbb Q_+ $ by \[ y_{t/n!} = \sqrt{ \frac{ c^{2/n!}-1 }{ c^2-1 } } y^{(1/n!)}_{t/n!} \quad \text{for } t \in \mathbb Z_+ \] and get \[
\| y_{t/n!} \|^2 = \frac{ c^{2/n!}-1 }{ c^2-1 } \cdot \frac{ c^{2t/n!}-1 }{ c^{2/n!}-1 } = \frac{ c^{2t/n!}-1 }{ c^2-1 } \quad \text{for } t \in \mathbb Z_+ \, , \]
that is, $ \| y_t \|^2 = (c^{2t}-1)/(c^2-1) $ for all $ t \in \mathbb Q_+ $. \end{proof}
\section[Arveson systems, Liebscher continuity]
{\raggedright Arveson systems, Liebscher continuity} \label{sec:4} An Arveson system, or product system of Hilbert spaces, consists of separable Hilbert spaces $ \tilde E_t $ for $ t \in \mathbb R_+ = (0,\infty) $ and unitary operators \[ \tilde\beta_{s,t} : \tilde E_{s+t} \to \tilde E_s \otimes \tilde E_t \] for $ s,t \in \mathbb R_+ $, satisfying well-known conditions, see \cite[Def.~3.1.1]{Ar}.
\begin{definition}\label{4.1} Let $ (E_t,\beta_{s,t})_{s,t\in\mathbb Q_+} $ be a rational-time subproduct system (as defined in Sect.~\ref{sec:3}) and $ (\tilde E_t,\tilde\beta_{s,t})_{s,t\in\mathbb R_+} $ an Arveson system. A \emph{representation} of the subproduct system in the Arveson system consists of linear isometric embeddings \[ \alpha_t : E_t \to \tilde E_t \quad \text{for } t \in \mathbb Q_+ \] such that the diagram \[ \xymatrix{
E_{s+t} \ar[d]_{\alpha_{s+t}} \ar[r]^{\beta_{s,t}} & E_s \otimes E_t
\ar[d]^{\alpha_s \otimes \alpha_t} \\
\tilde E_{s+t} \ar[r]^{\tilde\beta_{s,t}} & \tilde E_s \otimes \tilde E_t } \] is commutative for all $ s,t \in \mathbb Q_+ $. \end{definition}
Denote by $ \xi_{s,t} : E_s \otimes E_t \to E_t \otimes E_s $ the unitary exchange operator, $ \xi_{s,t} ( f \otimes g ) = g \otimes f $.
\begin{proposition}\label{4.2} If $ (E_t,\beta_{s,t})_{s,t\in\mathbb Q_+} $ admits a representation (in some Arveson system) then for every $ h \in E_1 $ the function \[ t \mapsto \begin{cases}
\ip{ \beta_{t,1-t}(h) }{ \xi_{1-t,t} \beta_{1-t,t} (h) } &\text{for } t \in \mathbb Q
\cap (0,1), \\
\| h \|^2 &\text{for } t \in \{0,1\} \end{cases} \] is uniformly continuous on $ \mathbb Q \cap [0,1] $. \end{proposition}
\begin{proof} By a theorem of Liebscher \cite[Th.~7.7]{Li}, for every Arveson system $ (\tilde E_t,\tilde\beta_{s,t})_{s,t\in\mathbb R_+} $ the unitary operators $ U_t : \tilde E_1 \to \tilde E_1 $ defined for $ t \in (0,1) $ by \[ \tilde\beta_{t,1-t} ( U_t \tilde h ) = \tilde\xi_{1-t,t} \tilde\beta_{1-t,t} (\tilde h) \quad \text{for } \tilde h \in \tilde E_1 \] are a strongly continuous one-parameter unitary group (or rather its restriction to the time interval $ (0,1) $) such that $ U_1 = U_0 = {1\hskip-2.5pt{\rm l}} $; of course, $ \tilde\xi_{s,t} : \tilde E_s \otimes \tilde E_t \to \tilde E_t \otimes \tilde E_s $ is the exchange operator. It follows that for every $ \tilde h \in \tilde E_1 $ the function $ t \mapsto \ip{ U_t \tilde h }{ \tilde h } $ is (uniformly) continuous on $ [0,1] $. We may rewrite it as \[ \ip{ U_t \tilde h }{ \tilde h } = \ip{ \tilde\beta_{t,1-t} (U_t \tilde h) }{ \tilde\beta_{t,1-t} (\tilde h) } = \ip{ \tilde\xi_{1-t,t} \tilde\beta_{1-t,t} (\tilde h) }{ \tilde\beta_{t,1-t}(\tilde h) } \, . \]
Given a representation $ (\alpha_t)_t $ and a vector $ h \in E_1 $, we take $ \tilde h = \alpha_1(h) \in \tilde E_1 $ and get \begin{gather*} \tilde\beta_{t,1-t}(\tilde h) = ( \alpha_t \otimes \alpha_{1-t} ) ( \beta_{t,1-t}(h) ) \, ; \\ \tilde\xi_{1-t,t} \tilde\beta_{1-t,t} (\tilde h) = ( \alpha_t \otimes \alpha_{1-t} )
\xi_{1-t,t} \beta_{1-t,t} (h) \, ; \end{gather*} thus, \[ \ip{ U_t \tilde h }{ \tilde h } = \ip{ \xi_{1-t,t} \beta_{1-t,t} (h) }{ \beta_{t,1-t}(h) } \] for all $ t \in \mathbb Q \cap (0,1) $. Also, $ \ip{ U_1 \tilde h }{ \tilde h } = \ip{ U_0
\tilde h }{ \tilde h } = \| \tilde h \|^2 = \| h \|^2 $. \end{proof}
\begin{corollary}\label{4.3} A rational-time subproduct system satisfying Condition (4) of Theorem \ref{3.1} admits no representation. The same holds for Condition (5). \end{corollary}
\begin{proof}
We take $ h = y_1 $ and get $ \| h \|^2 = 1 $ but \[ \ip{ \beta_{t,1-t}(h) }{ \xi_{1-t,t} \beta_{1-t,t} (h) } = \ip{ y_t \otimes x_{1-t} }{ x_t \otimes y_{1-t} } = 0 \] for all $ t \in \mathbb Q \cap (0,1) $. Condition (5) is treated similarly. \end{proof}
\begin{lemma}\label{4.4} If a rational-time subproduct system satisfying Condition (3) of Theorem \ref{3.1} admits a representation then there exists $ b \in \mathbb R $ such that $ \eta_t = {\rm e}^{{\rm i} bt} $ for all $ t \in \mathbb Q_+ $. \end{lemma}
\begin{proof} It is sufficient to prove that $ \eta_t \to 1 $ as $ t \to 0 $, $ t \in \mathbb Q_+
$. We apply Prop.~\ref{4.2} to $ h = y_1 $ and get $ \| h \|^2 = 1 $ but \begin{multline*} \ip{ \beta_{t,1-t}(h) }{ \xi_{1-t,t} \beta_{1-t,t} (h) } = \\ = \ip{ y_t \otimes x_{1-t} + c^t \eta_t x_t \otimes y_{1-t} }{ x_t \otimes
y_{1-t} + c^{1-t} \eta_{1-t} y_t \otimes x_{1-t} } = \\
= c^t \eta_t \| x_t \otimes y_{1-t} \|^2 + \overline{ c^{1-t} \eta_{1-t} } \|
y_t \otimes x_{1-t} \|^2 = \\
= \eta_t ( c^t \| y_{1-t} \|^2 + c^{1-t} \overline\eta_1 \| y_t \|^2 ) \, , \end{multline*} since $ \eta_t \eta_{1-t} = \eta_1 $ and $ \overline{ \eta_{1-t} } = 1 /
\eta_{1-t} $. It remains to note that $ c^t \| y_{1-t} \|^2 + c^{1-t}
\overline\eta_1 \| y_t \|^2 \to 1 $ as $ t \to 0 $, $ t \in \mathbb Q_+ $, since $ \|
y_t \|^2 \to 0 $. \end{proof}
\begin{lemma}\label{4.45} If a rational-time subproduct system satisfying Condition (1) of Theorem \ref{3.1} admits a representation then $ a \ne 0 $. \end{lemma}
\begin{proof} We define $ \tilde h \in \tilde E_1 $ by \[ \tilde\beta_{0.5,0.5} (\tilde h) = \alpha_{0.5} (x_{0.5}) \otimes \alpha_{0.5} (y_{0.5}) \]
and observe that $ \| \tilde h \| = 1 $ but $ \ip{ U_t \tilde h }{ \tilde h } = a^{2t} $ for $ t \in \mathbb Q \cap (0,0.5) $. \end{proof}
\begin{theorem}\label{4.5} For every rational-time subproduct system admitting a representation (in some Arveson system) there exist vectors $ x_t, y_t \in E_t $ for $ t \in \mathbb Q_+ $ such that one and only one of the following two conditions (numbered 1, 3) is satisfied.
(1) There exists $ a \in (0,1) $ such that for all $ s,t \in \mathbb Q_+ $ \begin{gather*}
\| x_t \| = \| y_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = a^t \, , \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes y_t \, . \end{gather*}
(3) There exist $ c \in (0,\infty) $ and $ b \in \mathbb R $ such that for all $ s,t \in \mathbb Q_+ $ \begin{gather*}
\| x_t \| = 1 \, , \quad \ip{ x_t }{ y_t } = 0 \, , \\
\| y_t \|^2 = \begin{cases}
t &\text{if } c = 1,\\
(c^{2t}-1) / (c^2-1) &\text{if } c \ne 1;
\end{cases} \\ \beta_{s,t} (x_{s+t}) = x_s \otimes x_t \, , \quad \beta_{s,t} (y_{s+t}) = y_s \otimes x_t + c^s {\rm e}^{{\rm i} bs} x_s \otimes y_t \, . \end{gather*} \end{theorem}
\begin{proof} Combine Theorem \ref{3.1}, Corollary \ref{4.3}, Lemma \ref{4.4} and Lemma \ref{4.45}. \end{proof}
\begin{lemma}\label{4.6} Every rational-time subproduct system satisfying Condition (1) of Theorem \ref{4.5} admits a representation in an Arveson system of type $ I_1 $. \end{lemma}
\begin{proof}
It is straightforward: in an Arveson system of type $ I_1 $ we choose two units $ (u_t)_t $, $ (v_t)_t $ such that $ \| u_t \| = 1 $, $ \| v_t \| = 1 $ and $ \ip{ u_t }{ v_t } = a^t $ for all $ t \in [0,\infty) $, and let $ \alpha_t(x_t) = u_t $, $ \alpha_t(y_t) = v_t $ for all $ t \in \mathbb Q_+ $. \end{proof}
\begin{lemma}\label{4.7} Every rational-time subproduct system satisfying Condition (3) of Theorem \ref{4.5} admits a representation in an Arveson system of type $ I_1 $. \end{lemma}
\begin{proof} We take the type $ I_1 $ Arveson system of symmetric Fock spaces \[ \tilde E_t = \bigoplus_{n=0}^\infty \( L_2(0,t) \)^{\otimes n} = \mathbb C \oplus L_2(0,t) \oplus L_2(0,t) \otimes L_2(0,t) \oplus \dots \] and map $ E_t $ into the sum of the first two terms, \begin{gather*} \alpha_t : E_t \to \mathbb C \oplus L_2(0,t) \subset \tilde E_t \, , \\ \alpha_t (x_t) = 1 \oplus 0 \, , \quad \text{(the vacuum vector)} \\ \alpha_t (y_t) = 0 \oplus f_t \, , \quad f_t \in L_2(0,t) \text{ is defined by} \\ f_t (s) = A c^s {\rm e}^{{\rm i} bs} \quad \text{for } s \in (0,t) \, , \end{gather*} $ A $ being the normalizing constant, \[ A = \begin{cases}
1 &\text{if } c=1, \\
\sqrt{ \frac{ 2\ln c }{ c^2-1 } } &\text{if } c \ne 1. \end{cases} \] \end{proof}
A representation (as defined by \ref{4.1}) will be called \emph{reducible,} if it is also a representation in a proper Arveson subsystem of the given Arveson system. Otherwise the representation is \emph{irreducible.} All irreducible representations (if any) of a given rational-time subproduct system are mutually isomorphic. In this sense a rational-time subproduct system either extends uniquely to the corresponding Arveson system, or is not embeddable into Arveson systems.
The representations constructed in Lemmas \ref{4.6}, \ref{4.7} are evidently irreducible.
\begin{theorem} If a rational-time subproduct system has an irreducible representation in an Arveson system then the Arveson system is of type $ I_1 $. \end{theorem}
\begin{proof} Combine Theorem \ref{4.5} and Lemmas \ref{4.6}, \ref{4.7}. \end{proof}
\filbreak { \small \begin{sc} \parindent=0pt\baselineskip=12pt \parbox{4in}{ Boris Tsirelson\\ School of Mathematics\\ Tel Aviv University\\ Tel Aviv 69978, Israel
\par\quad\href{mailto:[email protected]}{\tt
mailto:[email protected]} \par\quad\href{http://www.tau.ac.il/~tsirel/}{\tt
http://www.tau.ac.il/\textasciitilde tsirel/} }
\end{sc} } \filbreak
\end{document}
|
arXiv
|
\begin{definition}[Definition:Square Root/Complex Number/Definition 1]
Let $z \in \C$ be a complex number expressed in polar form as $\left \langle{r, \theta}\right\rangle = r \left({\cos \theta + i \sin \theta}\right)$.
The '''square root of $z$''' is the $2$-valued multifunction:
{{begin-eqn}}
{{eqn | l = z^{1/2}
| r = \left\{ {\sqrt r \left({\cos \left({\frac {\theta + 2 k \pi} 2}\right) + i \sin \left({\frac {\theta + 2 k \pi} 2}\right) }\right): k \in \left\{ {0, 1}\right\} }\right\}
| c =
}}
{{eqn | r = \left\{ {\sqrt r \left({\cos \left({\frac \theta 2 + k \pi}\right) + i \sin \left({\frac \theta 2 + k \pi}\right) }\right): k \in \left\{ {0, 1}\right\} }\right\}
| c =
}}
{{end-eqn}}
where $\sqrt r$ denotes the positive square root of $r$.
\end{definition}
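As a simple worked example: for $z = i$ one has $r = 1$ and $\theta = \dfrac \pi 2$, so the two square roots are $\cos \dfrac \pi 4 + i \sin \dfrac \pi 4 = \dfrac {1 + i} {\sqrt 2}$ (taking $k = 0$) and $\cos \dfrac {5 \pi} 4 + i \sin \dfrac {5 \pi} 4 = -\dfrac {1 + i} {\sqrt 2}$ (taking $k = 1$).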
|
ProofWiki
|
Quintic function
In mathematics, a quintic function is a function of the form
$g(x)=ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f,\,$
where a, b, c, d, e and f are members of a field, typically the rational numbers, the real numbers or the complex numbers, and a is nonzero. In other words, a quintic function is defined by a polynomial of degree five.
Because they have an odd degree, normal quintic functions appear similar to normal cubic functions when graphed, except they may possess one additional local maximum and one additional local minimum. The derivative of a quintic function is a quartic function.
Setting g(x) = 0 and assuming a ≠ 0 produces a quintic equation of the form:
$ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0.\,$
Solving quintic equations in terms of radicals (nth roots) was a major problem in algebra from the 16th century, when cubic and quartic equations were solved, until the first half of the 19th century, when the impossibility of such a general solution was proved with the Abel–Ruffini theorem.
Finding roots of a quintic equation
Finding the roots (zeros) of a given polynomial has been a prominent mathematical problem.
Solving linear, quadratic, cubic and quartic equations by factorization into radicals can always be done, no matter whether the roots are rational or irrational, real or complex; there are formulae that yield the required solutions. However, there is no algebraic expression (that is, in terms of radicals) for the solutions of general quintic equations over the rationals; this statement is known as the Abel–Ruffini theorem, first asserted in 1799 and completely proved in 1824. This result also holds for equations of higher degree. An example of a quintic whose roots cannot be expressed in terms of radicals is x^5 − x + 1 = 0.
Some quintics may be solved in terms of radicals. However, the solution is generally too complicated to be used in practice. Instead, numerical approximations are calculated using a root-finding algorithm for polynomials.
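To make this concrete, here is a minimal sketch (the library choice and variable names are illustrative, not part of the article) that approximates the roots of the example x^5 − x + 1 = 0 with NumPy's companion-matrix root finder:

    import numpy as np

    # coefficients of x^5 - x + 1, highest degree first
    coeffs = [1, 0, 0, 0, -1, 1]
    roots = np.roots(coeffs)   # eigenvalues of the companion matrix
    print(roots)               # one real root near -1.1673 and two complex-conjugate pairs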
Solvable quintics
Some quintic equations can be solved in terms of radicals. These include the quintic equations defined by a polynomial that is reducible, such as x^5 − x^4 − x + 1 = (x^2 + 1)(x + 1)(x − 1)^2. For example, it has been shown[1] that
$x^{5}-x-r=0$
has solutions in radicals if and only if it has an integer solution or r is one of ±15, ±22440, or ±2759640, in which cases the polynomial is reducible.
As solving reducible quintic equations reduces immediately to solving polynomials of lower degree, only irreducible quintic equations are considered in the remainder of this section, and the term "quintic" will refer only to irreducible quintics. A solvable quintic is thus an irreducible quintic polynomial whose roots may be expressed in terms of radicals.
To characterize solvable quintics, and more generally solvable polynomials of higher degree, Évariste Galois developed techniques which gave rise to group theory and Galois theory. Applying these techniques, Arthur Cayley found a general criterion for determining whether any given quintic is solvable.[2] This criterion is the following.[3]
Given the equation
$ax^{5}+bx^{4}+cx^{3}+dx^{2}+ex+f=0,$
the Tschirnhaus transformation x = y − b/(5a), which depresses the quintic (that is, removes the term of degree four), gives the equation
$y^{5}+py^{3}+qy^{2}+ry+s=0$,
where
${\begin{aligned}p&={\frac {5ac-2b^{2}}{5a^{2}}}\\q&={\frac {25a^{2}d-15abc+4b^{3}}{25a^{3}}}\\r&={\frac {125a^{3}e-50a^{2}bd+15ab^{2}c-3b^{4}}{125a^{4}}}\\s&={\frac {3125a^{4}f-625a^{3}be+125a^{2}b^{2}d-25ab^{3}c+4b^{5}}{3125a^{5}}}\end{aligned}}$
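The vanishing of the degree-four term and the stated formula for p can be verified symbolically; the following is a small sketch assuming SymPy is available (a similar check works for q, r and s):

    import sympy as sp

    a, b, c, d, e, f, y = sp.symbols('a b c d e f y')
    x = y - b/(5*a)                                    # Tschirnhaus substitution (a != 0)
    depressed = sp.expand(a*x**5 + b*x**4 + c*x**3 + d*x**2 + e*x + f)
    poly = sp.Poly(depressed, y)

    print(sp.simplify(poly.coeff_monomial(y**4)))      # 0: no degree-four term remains
    p = poly.coeff_monomial(y**3) / a                  # divide by a to make the quintic monic
    print(sp.simplify(p - (5*a*c - 2*b**2)/(5*a**2)))  # 0: agrees with the formula above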
Both quintics are solvable by radicals if and only if either they are factorisable in equations of lower degrees with rational coefficients or the polynomial P2 − 1024 z Δ, named Cayley's resolvent, has a rational root in z, where
${\begin{aligned}P={}&z^{3}-z^{2}(20r+3p^{2})-z(8p^{2}r-16pq^{2}-240r^{2}+400sq-3p^{4})\\&-p^{6}+28p^{4}r-16p^{3}q^{2}-176p^{2}r^{2}-80p^{2}sq+224prq^{2}-64q^{4}\\&+4000ps^{2}+320r^{3}-1600rsq\end{aligned}}$
and
${\begin{aligned}\Delta ={}&-128p^{2}r^{4}+3125s^{4}-72p^{4}qrs+560p^{2}qr^{2}s+16p^{4}r^{3}+256r^{5}+108p^{5}s^{2}\\&-1600qr^{3}s+144pq^{2}r^{3}-900p^{3}rs^{2}+2000pr^{2}s^{2}-3750pqs^{3}+825p^{2}q^{2}s^{2}\\&+2250q^{2}rs^{2}+108q^{5}s-27q^{4}r^{2}-630pq^{3}rs+16p^{3}q^{3}s-4p^{3}q^{2}r^{2}.\end{aligned}}$
Cayley's result allows us to test if a quintic is solvable. If it is the case, finding its roots is a more difficult problem, which consists of expressing the roots in terms of radicals involving the coefficients of the quintic and the rational root of Cayley's resolvent.
In 1888, George Paxton Young described how to solve a solvable quintic equation, without providing an explicit formula;[4] in 2004, Daniel Lazard wrote out a three-page formula.[5]
Quintics in Bring–Jerrard form
There are several parametric representations of solvable quintics of the form x^5 + ax + b = 0, called the Bring–Jerrard form.
During the second half of the 19th century, John Stuart Glashan, George Paxton Young, and Carl Runge gave such a parameterization: an irreducible quintic with rational coefficients in Bring–Jerrard form is solvable if and only if either a = 0 or it may be written
$x^{5}+{\frac {5\mu ^{4}(4\nu +3)}{\nu ^{2}+1}}x+{\frac {4\mu ^{5}(2\nu +1)(4\nu +3)}{\nu ^{2}+1}}=0$
where μ and ν are rational.
In 1994, Blair Spearman and Kenneth S. Williams gave an alternative,
$x^{5}+{\frac {5e^{4}(4c+3)}{c^{2}+1}}x+{\frac {-4e^{5}(2c-11)}{c^{2}+1}}=0.$
The relationship between the 1885 and 1994 parameterizations can be seen by defining the expression
$b={\frac {4}{5}}\left(a+20\pm 2{\sqrt {(20-a)(5+a)}}\right)$
where a = 5(4ν + 3)/(ν^2 + 1). Using the negative case of the square root yields, after scaling variables, the first parametrization while the positive case gives the second.
The substitution c = −m/l^5, e = 1/l in the Spearman-Williams parameterization allows one to not exclude the special case a = 0, giving the following result:
If a and b are rational numbers, the equation x^5 + ax + b = 0 is solvable by radicals if either its left-hand side is a product of polynomials of degree less than 5 with rational coefficients or there exist two rational numbers l and m such that
$a={\frac {5l(3l^{5}-4m)}{m^{2}+l^{10}}}\qquad b={\frac {4(11l^{5}+2m)}{m^{2}+l^{10}}}.$
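As a worked illustration of this criterion (the specific numbers are chosen only as an example), taking l = 1 and m = 2 gives a = 5 · 1 · (3 − 8)/(4 + 1) = −5 and b = 4 · (11 + 4)/(4 + 1) = 12, so x^5 − 5x + 12 = 0 is solvable by radicals; an explicit radical expression for its real root is given below under "Roots of a solvable quintic".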
Roots of a solvable quintic
A polynomial equation is solvable by radicals if its Galois group is a solvable group. In the case of irreducible quintics, the Galois group is a subgroup of the symmetric group S5 of all permutations of a five element set, which is solvable if and only if it is a subgroup of the group F5, of order 20, generated by the cyclic permutations (1 2 3 4 5) and (1 2 4 3).
If the quintic is solvable, one of the solutions may be represented by an algebraic expression involving a fifth root and at most two square roots, generally nested. The other solutions may then be obtained either by changing the fifth root or by multiplying all the occurrences of the fifth root by the same power of a primitive 5th root of unity, such as
${\frac {{\sqrt {-10-2{\sqrt {5}}}}+{\sqrt {5}}-1}{4}}.$
In fact, all four primitive fifth roots of unity may be obtained by changing the signs of the square roots appropriately; namely, the expression
${\frac {\alpha {\sqrt {-10-2\beta {\sqrt {5}}}}+\beta {\sqrt {5}}-1}{4}},$
where $\alpha ,\beta \in \{-1,1\}$, yields the four distinct primitive fifth roots of unity.
It follows that one may need four different square roots for writing all the roots of a solvable quintic. Even for the first root that involves at most two square roots, the expression of the solutions in terms of radicals is usually highly complicated. However, when no square root is needed, the form of the first solution may be rather simple, as for the equation x^5 − 5x^4 + 30x^3 − 50x^2 + 55x − 21 = 0, for which the only real solution is
$x=1+{\sqrt[{5}]{2}}-\left({\sqrt[{5}]{2}}\right)^{2}+\left({\sqrt[{5}]{2}}\right)^{3}-\left({\sqrt[{5}]{2}}\right)^{4}.$
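A quick numerical check of this expression, as an illustrative sketch only:

    r = 2 ** 0.2                      # the real fifth root of 2
    x = 1 + r - r**2 + r**3 - r**4    # about 0.6038
    print(x**5 - 5*x**4 + 30*x**3 - 50*x**2 + 55*x - 21)   # ~0, up to rounding error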
An example of a more complicated (although small enough to be written here) solution is the unique real root of x^5 − 5x + 12 = 0. Let a = √(2/φ), b = √(2φ), and c = ⁴√5 (the fourth root of 5), where φ = (1 + √5)/2 is the golden ratio. Then the only real solution x = −1.84208... is given by
$-cx={\sqrt[{5}]{(a+c)^{2}(b-c)}}+{\sqrt[{5}]{(-a+c)(b-c)^{2}}}+{\sqrt[{5}]{(a+c)(b+c)^{2}}}-{\sqrt[{5}]{(-a+c)^{2}(b+c)}}\,,$
or, equivalently, by
$x={\sqrt[{5}]{y_{1}}}+{\sqrt[{5}]{y_{2}}}+{\sqrt[{5}]{y_{3}}}+{\sqrt[{5}]{y_{4}}}\,,$
where the yi are the four roots of the quartic equation
$y^{4}+4y^{3}+{\frac {4}{5}}y^{2}-{\frac {8}{5^{3}}}y-{\frac {1}{5^{5}}}=0\,.$
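The radical expression above can be checked numerically in the same spirit (an illustrative sketch; all fifth-root arguments happen to be positive reals here, so real powers suffice):

    phi = (1 + 5**0.5) / 2
    a, b, c = (2/phi)**0.5, (2*phi)**0.5, 5**0.25
    fifth = lambda t: t ** (1/5)
    x = -(fifth((a+c)**2 * (b-c)) + fifth((-a+c) * (b-c)**2)
          + fifth((a+c) * (b+c)**2) - fifth((-a+c)**2 * (b+c))) / c
    print(x, x**5 - 5*x + 12)         # about -1.84208 and ~0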
More generally, if an equation P(x) = 0 of prime degree p with rational coefficients is solvable in radicals, then one can define an auxiliary equation Q(y) = 0 of degree p – 1, also with rational coefficients, such that each root of P is the sum of p-th roots of the roots of Q. These p-th roots were introduced by Joseph-Louis Lagrange, and their products by p are commonly called Lagrange resolvents. The computation of Q and its roots can be used to solve P(x) = 0. However these p-th roots may not be computed independently (this would provide p^(p−1) roots instead of p). Thus a correct solution needs to express all these p-th roots in terms of one of them. Galois theory shows that this is always theoretically possible, even if the resulting formula may be too large to be of any use.
It is possible that some of the roots of Q are rational (as in the first example of this section) or some are zero. In these cases, the formula for the roots is much simpler, as for the solvable de Moivre quintic
$x^{5}+5ax^{3}+5a^{2}x+b=0\,,$
where the auxiliary equation has two zero roots and reduces, by factoring them out, to the quadratic equation
$y^{2}+by-a^{5}=0\,,$
such that the five roots of the de Moivre quintic are given by
$x_{k}=\omega ^{k}{\sqrt[{5}]{y_{i}}}-{\frac {a}{\omega ^{k}{\sqrt[{5}]{y_{i}}}}},$
where yi is any root of the auxiliary quadratic equation and ω is any of the four primitive 5th roots of unity. This can be easily generalized to construct a solvable septic and other odd degrees, not necessarily prime.
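For instance (an illustrative numerical sketch, with a and b chosen arbitrarily), take a = 1 and b = 2, so the auxiliary quadratic is y^2 + 2y − 1 = 0 with root y = −1 + √2:

    a, b = 1, 2
    y = -1 + 2**0.5                  # a root of y**2 + b*y - a**5 = 0
    t = y ** (1/5)                   # real fifth root (y > 0 here), i.e. omega = 1
    x = t - a/t
    print(x, x**5 + 5*a*x**3 + 5*a**2*x + b)    # about -0.3543 and ~0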
Other solvable quintics
There are infinitely many solvable quintics in Bring-Jerrard form which have been parameterized in a preceding section.
Up to the scaling of the variable, there are exactly five solvable quintics of the shape $x^{5}+ax^{2}+b$, which are[6] (where s is a scaling factor):
$x^{5}-2s^{3}x^{2}-{\frac {s^{5}}{5}}$
$x^{5}-100s^{3}x^{2}-1000s^{5}$
$x^{5}-5s^{3}x^{2}-3s^{5}$
$x^{5}-5s^{3}x^{2}+15s^{5}$
$x^{5}-25s^{3}x^{2}-300s^{5}$
Paxton Young (1888) gave a number of examples of solvable quintics:
$x^{5}-10x^{3}-20x^{2}-1505x-7412$
$x^{5}+{\frac {625}{4}}x+3750$
$x^{5}-{\frac {22}{5}}x^{3}-{\frac {11}{25}}x^{2}+{\frac {462}{125}}x+{\frac {979}{3125}}$
$x^{5}+20x^{3}+20x^{2}+30x+10$$~\qquad ~$ Root: ${\sqrt[{5}]{2}}-{\sqrt[{5}]{2}}^{2}+{\sqrt[{5}]{2}}^{3}-{\sqrt[{5}]{2}}^{4}$
$x^{5}-20x^{3}+250x-400$
$x^{5}-5x^{3}+{\frac {85}{8}}x-{\frac {13}{2}}$
$x^{5}+{\frac {20}{17}}x+{\frac {21}{17}}$
$x^{5}-{\frac {4}{13}}x+{\frac {29}{65}}$
$x^{5}+{\frac {10}{13}}x+{\frac {3}{13}}$
$x^{5}+110(5x^{3}+60x^{2}+800x+8320)$
$x^{5}-20x^{3}-80x^{2}-150x-656$
$x^{5}-40x^{3}+160x^{2}+1000x-5888$
$x^{5}-50x^{3}-600x^{2}-2000x-11200$
$x^{5}+110(5x^{3}+20x^{2}-360x+800)$
$x^{5}-20x^{3}+170x+208$
An infinite sequence of solvable quintics may be constructed, whose roots are sums of nth roots of unity, with n = 10k + 1 being a prime number:
$x^{5}+x^{4}-4x^{3}-3x^{2}+3x+1$Roots: $2\cos \left({\frac {2k\pi }{11}}\right)$
$x^{5}+x^{4}-12x^{3}-21x^{2}+x+5$Root: $\sum _{k=0}^{5}e^{\frac {2i\pi 6^{k}}{31}}$
$x^{5}+x^{4}-16x^{3}+5x^{2}+21x-9$Root: $\sum _{k=0}^{7}e^{\frac {2i\pi 3^{k}}{41}}$
$x^{5}+x^{4}-24x^{3}-17x^{2}+41x-13$$~\qquad ~$Root: $\sum _{k=0}^{11}e^{\frac {2i\pi (21)^{k}}{61}}$
$x^{5}+x^{4}-28x^{3}+37x^{2}+25x+1$Root: $\sum _{k=0}^{13}e^{\frac {2i\pi (23)^{k}}{71}}$
There are also two parameterized families of solvable quintics: The Kondo–Brumer quintic,
$x^{5}+(a-3)\,x^{4}+(-a+b+3)\,x^{3}+(a^{2}-a-1-2b)\,x^{2}+b\,x+a=0$
and the family depending on the parameters $a,\ell ,m$
$x^{5}-5\,p\left(2\,x^{3}+a\,x^{2}+b\,x\right)-p\,c=0$
where
$p={\tfrac {1}{4}}\left[\,\ell ^{2}(4m^{2}+a^{2})-m^{2}\,\right]\;,$
$b=\ell \,(4m^{2}+a^{2})-5p-2m^{2}\;,$
$c={\tfrac {1}{2}}\left[\,b(a+4m)-p(a-4m)-a^{2}m\,\right]\;.$
Casus irreducibilis
Analogously to cubic equations, there are solvable quintics which have five real roots all of whose solutions in radicals involve roots of complex numbers. This is casus irreducibilis for the quintic, which is discussed in Dummit.[7]: p.17 Indeed, if an irreducible quintic has all roots real, no root can be expressed purely in terms of real radicals (as is true for all polynomial degrees that are not powers of 2).
Beyond radicals
About 1835, Jerrard demonstrated that quintics can be solved by using ultraradicals (also known as Bring radicals), the unique real root of t^5 + t − a = 0 for real numbers a. In 1858 Charles Hermite showed that the Bring radical could be characterized in terms of the Jacobi theta functions and their associated elliptic modular functions, using an approach similar to the more familiar approach of solving cubic equations by means of trigonometric functions. At around the same time, Leopold Kronecker, using group theory, developed a simpler way of deriving Hermite's result, as had Francesco Brioschi. Later, Felix Klein came up with a method that relates the symmetries of the icosahedron, Galois theory, and the elliptic modular functions that are featured in Hermite's solution, giving an explanation for why they should appear at all, and developed his own solution in terms of generalized hypergeometric functions.[8] Similar phenomena occur in degree 7 (septic equations) and 11, as studied by Klein and discussed in Icosahedral symmetry § Related geometries.
Solving with Bring radicals
Main article: Bring radical
A Tschirnhaus transformation, which may be computed by solving a quartic equation, reduces the general quintic equation of the form
$x^{5}+a_{4}x^{4}+a_{3}x^{3}+a_{2}x^{2}+a_{1}x+a_{0}=0\,$
to the Bring–Jerrard normal form x^5 − x + t = 0.
The roots of this equation cannot be expressed by radicals. However, in 1858, Charles Hermite published the first known solution of this equation in terms of elliptic functions.[9] At around the same time Francesco Brioschi[10] and Leopold Kronecker[11] came upon equivalent solutions.
See Bring radical for details on these solutions and some related ones.
Application to celestial mechanics
Solving for the locations of the Lagrangian points of an astronomical orbit in which the masses of both objects are non-negligible involves solving a quintic.
More precisely, the locations of L2 and L1 are the solutions to the following equations, where the gravitational forces of two masses on a third (for example, Sun and Earth on satellites such as Gaia and the James Webb Space Telescope at L2 and SOHO at L1) provide the satellite's centripetal force necessary to be in a synchronous orbit with Earth around the Sun:
${\frac {GmM_{S}}{(R\pm r)^{2}}}\pm {\frac {GmM_{E}}{r^{2}}}=m\omega ^{2}(R\pm r)$
The ± sign corresponds to L2 and L1, respectively; G is the gravitational constant, ω the angular velocity, r the distance of the satellite to Earth, R the distance Sun to Earth (that is, the semi-major axis of Earth's orbit), and m, ME, and MS are the respective masses of satellite, Earth, and Sun.
Using Kepler's Third Law $\omega ^{2}={\frac {4\pi ^{2}}{P^{2}}}={\frac {G(M_{S}+M_{E})}{R^{3}}}$ and rearranging all terms yields the quintic
$ar^{5}+br^{4}+cr^{3}+dr^{2}+er+f=0$
with:
${\begin{aligned}&a=\pm (M_{S}+M_{E}),\\&b=+(M_{S}+M_{E})3R,\\&c=\pm (M_{S}+M_{E})3R^{2},\\&d=+(M_{E}\mp M_{E})R^{3}\ (thus\ d=0\ for\ L_{2}),\\&e=\pm M_{E}2R^{4},\\&f=\mp M_{E}R^{5}\end{aligned}}$ .
Solving these two quintics yields r = 1.501 × 10^9 m for L2 and r = 1.491 × 10^9 m for L1. The Sun–Earth Lagrangian points L2 and L1 are usually given as 1.5 million km from Earth.
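For readers who want to reproduce such numbers, the force-balance equation above can also be solved directly with a one-dimensional root finder; the sketch below assumes SciPy is available and uses rounded values of G, the masses and R, so the result is only approximate:

    from scipy.optimize import brentq

    G  = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
    MS = 1.989e30         # mass of the Sun, kg
    ME = 5.972e24         # mass of the Earth, kg
    R  = 1.496e11         # Sun-Earth distance, m
    w2 = G*(MS + ME)/R**3                                     # omega^2, Kepler's third law

    f = lambda r: G*MS/(R + r)**2 + G*ME/r**2 - w2*(R + r)    # L2 balance (upper signs)
    print(brentq(f, 1e6, 1e10))                               # roughly 1.5e9 m from Earth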
If the mass of the smaller object (ME) is much smaller than the mass of the larger object (MS), then the quintic equation can be greatly reduced and L1 and L2 are at approximately the radius of the Hill sphere, given by:
$r\approx R{\sqrt[{3}]{\frac {M_{E}}{3M_{S}}}}$
That also yields r ≈ 1.5 × 10^9 m for satellites at L1 and L2 in the Sun-Earth system.
See also
• Sextic equation
• Septic function
• Theory of equations
Notes
1. Elia, M.; Filipponi, P. (1998). "Equations of the Bring-Jerrard Form, the Golden Section, and Square Fibonacci Numbers" (PDF). The Fibonacci Quarterly. 36 (3): 282–286.
2. A. Cayley, "On a new auxiliary equation in the theory of equation of the fifth order", Philosophical Transactions of the Royal Society of London 151:263-276 (1861) doi:10.1098/rstl.1861.0014
3. This formulation of Cayley's result is extracted from the Lazard (2004) paper.
4. George Paxton Young, "Solvable Quintic Equations with Commensurable Coefficients", American Journal of Mathematics 10:99–130 (1888), JSTOR 2369502
5. Lazard (2004, p. 207)
6. Elkies, Noam. "Trinomials a xn + b x + c with interesting Galois groups". Harvard University.
7. David S. Dummit Solving Solvable Quintics
8. (Klein 1888); a modern exposition is given in (Tóth 2002, Section 1.6, Additional Topic: Klein's Theory of the Icosahedron, p. 66)
9. Hermite, Charles (1858). "Sur la résolution de l'équation du cinquième degré". Comptes Rendus de l'Académie des Sciences. XLVI (I): 508–515.
10. Brioschi, Francesco (1858). "Sul Metodo di Kronecker per la Risoluzione delle Equazioni di Quinto Grado". Atti Dell'i. R. Istituto Lombardo di Scienze, Lettere ed Arti. I: 275–282.
11. Kronecker, Leopold (1858). "Sur la résolution de l'equation du cinquième degré, extrait d'une lettre adressée à M. Hermite". Comptes Rendus de l'Académie des Sciences. XLVI (I): 1150–1152.
References
• Charles Hermite, "Sur la résolution de l'équation du cinquème degré", Œuvres de Charles Hermite, 2:5–21, Gauthier-Villars, 1908.
• Klein, Felix (1888). Lectures on the Icosahedron and the Solution of Equations of the Fifth Degree. Translated by Morrice, George Gavin. Trübner & Co. ISBN 0-486-49528-0.
• Leopold Kronecker, "Sur la résolution de l'equation du cinquième degré, extrait d'une lettre adressée à M. Hermite", Comptes Rendus de l'Académie des Sciences, 46:1:1150–1152 1858.
• Blair Spearman and Kenneth S. Williams, "Characterization of solvable quintics x5 + ax + b", American Mathematical Monthly, 101:986–992 (1994).
• Ian Stewart, Galois Theory 2nd Edition, Chapman and Hall, 1989. ISBN 0-412-34550-1. Discusses Galois Theory in general including a proof of insolvability of the general quintic.
• Jörg Bewersdorff, Galois theory for beginners: A historical perspective, American Mathematical Society, 2006. ISBN 0-8218-3817-2. Chapter 8 (The solution of equations of the fifth degree at the Wayback Machine (archived 31 March 2010)) gives a description of the solution of solvable quintics x5 + cx + d.
• Victor S. Adamchik and David J. Jeffrey, "Polynomial transformations of Tschirnhaus, Bring and Jerrard," ACM SIGSAM Bulletin, Vol. 37, No. 3, September 2003, pp. 90–94.
• Ehrenfried Walter von Tschirnhaus, "A method for removing all intermediate terms from a given equation," ACM SIGSAM Bulletin, Vol. 37, No. 1, March 2003, pp. 1–3.
• Lazard, Daniel (2004). "Solving quintics in radicals". In Olav Arnfinn Laudal; Ragni Piene (eds.). The Legacy of Niels Henrik Abel. Berlin. pp. 207–225. ISBN 3-540-43826-2. Archived from the original on January 6, 2005.
• Tóth, Gábor (2002), Finite Möbius groups, minimal immersions of spheres, and moduli
External links
• Mathworld - Quintic Equation – more details on methods for solving Quintics.
• Solving Solvable Quintics – a method for solving solvable quintics due to David S. Dummit.
• A method for removing all intermediate terms from a given equation - a recent English translation of Tschirnhaus' 1683 paper.
|
Wikipedia
|
Semiprime ring
In ring theory, a branch of mathematics, semiprime ideals and semiprime rings are generalizations of prime ideals and prime rings. In commutative algebra, semiprime ideals are also called radical ideals and semiprime rings are the same as reduced rings.
For example, in the ring of integers, the semiprime ideals are the zero ideal, along with those ideals of the form $n\mathbb {Z} $ where n is a square-free integer. So, $30\mathbb {Z} $ is a semiprime ideal of the integers (because 30 = 2 × 3 × 5, with no repeated prime factors), but $12\mathbb {Z} \,$ is not (because 12 = 22 × 3, with a repeated prime factor).
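This square-free criterion is easy to test mechanically. The following short sketch is an added illustration (not part of the article) and assumes SymPy's factorint is available; it decides whether nZ is a semiprime (radical) ideal of the integers.

from sympy import factorint

def is_semiprime_ideal(n: int) -> bool:
    """Return True when n*Z is a semiprime (radical) ideal of Z.

    n*Z is semiprime exactly when n has no repeated prime factor;
    the zero ideal (n == 0) is also semiprime because Z is a domain,
    while n == +/-1 gives the whole ring, which is not a proper ideal.
    """
    if n == 0:
        return True
    n = abs(n)
    if n == 1:
        return False
    return all(exp == 1 for exp in factorint(n).values())

print(is_semiprime_ideal(30))  # True:  30 = 2 * 3 * 5
print(is_semiprime_ideal(12))  # False: 12 = 2**2 * 3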
The class of semiprime rings includes semiprimitive rings, prime rings and reduced rings.
Most definitions and assertions in this article appear in (Lam 1999) and (Lam 2001).
Definitions
For a commutative ring R, a proper ideal A is a semiprime ideal if A satisfies either of the following equivalent conditions:
• If xk is in A for some positive integer k and element x of R, then x is in A.
• If y is in R but not in A, all positive integer powers of y are not in A.
The latter condition that the complement is "closed under powers" is analogous to the fact that complements of prime ideals are closed under multiplication.
As with prime ideals, this is extended to noncommutative rings "ideal-wise". The following conditions are equivalent definitions for a semiprime ideal A in a ring R:
• For any ideal J of R, if Jk⊆A for a positive natural number k, then J⊆A.
• For any right ideal J of R, if Jk⊆A for a positive natural number k, then J⊆A.
• For any left ideal J of R, if Jk⊆A for a positive natural number k, then J⊆A.
• For any x in R, if xRx⊆A, then x is in A.
Here again, there is a noncommutative analogue of prime ideals as complements of m-systems. A nonempty subset S of a ring R is called an n-system if for any s in S, there exists an r in R such that srs is in S. With this notion, an additional equivalent point may be added to the above list:
• R\A is an n-system.
The ring R is called a semiprime ring if the zero ideal is a semiprime ideal. In the commutative case, this is equivalent to R being a reduced ring, since R has no nonzero nilpotent elements. In the noncommutative case, the ring merely has no nonzero nilpotent right ideals. So while a reduced ring is always semiprime, the converse is not true.[1]
General properties of semiprime ideals
To begin with, it is clear that prime ideals are semiprime, and that for commutative rings, a semiprime primary ideal is prime.
While the intersection of prime ideals is not usually prime, it is a semiprime ideal. Shortly it will be shown that the converse is also true, that every semiprime ideal is the intersection of a family of prime ideals.
For any ideal B in a ring R, we can form the following sets:
${\sqrt {B}}:=\bigcap \{P\subseteq R\mid B\subseteq P,P{\mbox{ a prime ideal}}\}\subseteq \{x\in R\mid x^{n}\in B{\mbox{ for some }}n\in \mathbb {N} ^{+}\}\,$
The set ${\sqrt {B}}$ is the definition of the radical of B and is clearly a semiprime ideal containing B, and in fact is the smallest semiprime ideal containing B. The inclusion above is sometimes proper in the general case, but for commutative rings it becomes an equality.
With this definition, an ideal A is semiprime if and only if ${\sqrt {A}}=A$. At this point, it is also apparent that every semiprime ideal is in fact the intersection of a family of prime ideals. Moreover, this shows that the intersection of any two semiprime ideals is again semiprime.
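In the integers this characterisation is concrete: the radical of nZ is rad(n)Z, where rad(n) is the product of the distinct prime factors of n, so nZ is semiprime exactly when rad(n) = n (up to sign). The sketch below is again an added illustration using SymPy, not part of the article.

from sympy import factorint

def radical_generator(n: int) -> int:
    """Generator of the radical of n*Z: the product of the distinct primes dividing n."""
    if n == 0:
        return 0   # the zero ideal is its own radical, since Z is a domain
    rad = 1
    for p in factorint(abs(n)):
        rad *= p
    return rad

for n in (12, 30, 45):
    print(n, radical_generator(n), radical_generator(n) == n)
# 12 -> 6  (12Z is not semiprime), 30 -> 30 (semiprime), 45 -> 15 (not semiprime)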
By definition R is semiprime if and only if ${\sqrt {\{0\}}}=\{0\}$, that is, the intersection of all prime ideals is zero. This ideal ${\sqrt {\{0\}}}$ is also denoted by $Nil_{*}(R)\,$ and also called Baer's lower nilradical or the Baer–McCoy radical or the prime radical of R.
Semiprime Goldie rings
Main article: Goldie's theorem
A right Goldie ring is a ring that has finite uniform dimension (also called finite rank) as a right module over itself, and satisfies the ascending chain condition on right annihilators of its subsets. Goldie's theorem states that the semiprime right Goldie rings are precisely those that have a semisimple Artinian right classical ring of quotients. The Artin–Wedderburn theorem then completely determines the structure of this ring of quotients.
References
1. The full ring of two-by-two matrices over a field is semiprime with nonzero nilpotent elements.
• Lam, Tsit-Yuen (1999), Lectures on modules and rings, Graduate Texts in Mathematics No. 189, Berlin, New York: Springer-Verlag, ISBN 978-0-387-98428-5, MR 1653294
• Lam, T. Y. (2001), A first course in noncommutative rings, Graduate Texts in Mathematics, vol. 131 (2 ed.), New York: Springer-Verlag, pp. xx+385, ISBN 978-0-387-95183-6, MR 1838439
External links
• PlanetMath article on semiprime ideals
|
Wikipedia
|
John Edwin Luecke
John Edwin Luecke is an American mathematician who works in topology and knot theory. He got his Ph.D. in 1985 from the University of Texas at Austin and is now a professor in the department of mathematics at that institution.
John Edwin Luecke
Nationality: American
Alma mater: University of Texas at Austin
Known for: Gordon–Luecke theorem
Scientific career
Fields: topology, knot theory
Doctoral advisor: Cameron McAllan Gordon
Work
Luecke specializes in knot theory and 3-manifolds. In a 1987 paper[1] Luecke, Marc Culler, Cameron Gordon, and Peter Shalen proved the cyclic surgery theorem. In a 1989 paper[2] Luecke and Cameron Gordon proved that knots are determined by their complements, a result now known as the Gordon–Luecke theorem.
Luecke received an NSF Presidential Young Investigator Award[3][4] in 1992 and a Sloan Foundation fellowship[5] in 1994. In 2012 he became a fellow of the American Mathematical Society.[6]
References
1. M. Culler, C. Gordon, J. Luecke, P. Shalen (1987). "Dehn surgery on knots". Annals of Mathematics. 125 (2): 237–300.
2. Cameron Gordon and John Luecke, Knots are determined by their complements. J. Amer. Math. Soc. 2 (1989), no. 2, 371–415.
3. The University of Texas at Austin; Faculty profile Archived 2011-09-27 at the Wayback Machine
4. NSF Presidential and Honorary Awards
5. Sloan Research Fellowships
6. List of Fellows of the American Mathematical Society, retrieved 2013-02-02.
External links
• John Edwin Luecke at the Mathematics Genealogy Project
• Luecke's home page at the University of Texas at Austin
|
Wikipedia
|
Dietary calcium to phosphorus ratio affects postprandial phosphorus concentrations in feline plasma
Jennifer C. Coltherd, Ruth Staunton, Alison Colyer, Matthew Gilham, John Rawlings, Janet Alexander, Darren W. Logan, Richard Butterwick, Phillip Watson, Anne Marie Bakke
Journal: British Journal of Nutrition / Accepted manuscript
Published online by Cambridge University Press: 18 November 2021, pp. 1-27
The impact of dietary phosphorus on chronic renal disease in cats, humans and other species is receiving increasing attention. As calcium (Ca) and phosphorus (P) metabolism are linked, the ratio of Ca:P is an important factor for consideration when formulating diets for cats and other animals. Here, we describe a fully randomized crossover study including 24 healthy, neutered adult cats, investigating post-prandial responses in plasma P, ionised Ca (iCa) and parathyroid hormone (PTH) following one meal (50% of individual metabolic energy requirement) of each of six experimental diets. Diets were formulated to provide P at either 0.75 or 1.5 g/1000kcal (4184kJ) from the soluble phosphorus salt sodium tripolyphosphate (STPP, Na5P3O10), variable levels of organic Ca and P sources, and an intended total Ca:P of ∼1.0, 1.5 or 2.0. For each experimental diet, baseline fasted blood samples were collected prior to the meal, and serial blood samples collected hourly for 6 hours thereafter. For all diets, a significant increase from baseline was observed at 120mins in plasma PTH (p<0.001). The diet containing the highest STPP inclusion level and lowest Ca:P induced the highest peaks in post-prandial plasma P and PTH levels (1.8mmol/l and 27.2pg/ml, respectively) and the longest duration of concentrations raised above baseline were observed at 3 hours for P and 6 hours for PTH. Data indicate that Ca:P modulates postprandial plasma P and PTH. Therefore, when formulating diets containing soluble P salts for cats, increasing the Ca:P ratio should be considered.
The ASKAP Variables and Slow Transients (VAST) Pilot Survey
Tara Murphy, David L. Kaplan, Adam J. Stewart, Andrew O'Brien, Emil Lenc, Sergio Pintaldi, Joshua Pritchard, Dougal Dobie, Archibald Fox, James K. Leung, Tao An, Martin E. Bell, Jess W. Broderick, Shami Chatterjee, Shi Dai, Daniele d'Antonio, Gerry Doyle, B. M. Gaensler, George Heald, Assaf Horesh, Megan L. Jones, David McConnell, Vanessa A. Moss, Wasim Raja, Gavin Ramsay, Stuart Ryder, Elaine M. Sadler, Gregory R. Sivakoff, Yuanming Wang, Ziteng Wang, Michael S. Wheatland, Matthew Whiting, James R. Allison, C. S. Anderson, Lewis Ball, K. Bannister, D. C.-J. Bock, R. Bolton, J. D. Bunton, R. Chekkala, A. P Chippendale, F. R. Cooray, N. Gupta, D. B. Hayman, K. Jeganathan, B. Koribalski, K. Lee-Waddell, Elizabeth K. Mahony, J. Marvil, N. M. McClure-Griffiths, P. Mirtschin, A. Ng, S. Pearce, C. Phillips, M. A. Voronkov
Journal: Publications of the Astronomical Society of Australia / Volume 38 / 2021
Published online by Cambridge University Press: 12 October 2021, e054
The Variables and Slow Transients Survey (VAST) on the Australian Square Kilometre Array Pathfinder (ASKAP) is designed to detect highly variable and transient radio sources on timescales from 5 s to $\sim\!5$ yr. In this paper, we present the survey description, observation strategy and initial results from the VAST Phase I Pilot Survey. This pilot survey consists of $\sim\!162$ h of observations conducted at a central frequency of 888 MHz between 2019 August and 2020 August, with a typical rms sensitivity of $0.24\ \mathrm{mJy\ beam}^{-1}$ and angular resolution of $12-20$ arcseconds. There are 113 fields, each of which was observed for 12 min integration time, with between 5 and 13 repeats, with cadences between 1 day and 8 months. The total area of the pilot survey footprint is 5 131 square degrees, covering six distinct regions of the sky. An initial search of two of these regions, totalling 1 646 square degrees, revealed 28 highly variable and/or transient sources. Seven of these are known pulsars, including the millisecond pulsar J2039–5617. Another seven are stars, four of which have no previously reported radio detection (SCR J0533–4257, LEHPM 2-783, UCAC3 89–412162 and 2MASS J22414436–6119311). Of the remaining 14 sources, two are active galactic nuclei, six are associated with galaxies and the other six have no multi-wavelength counterparts and are yet to be identified.
Test–Retest Reliability of a Semi-Structured Interview to Aid in Pediatric Traumatic Brain Injury Diagnosis
Danielle C. Hergert, Veronik Sicard, David D. Stephenson, Sharvani Pabbathi Reddy, Cidney R. Robertson-Benta, Andrew B. Dodd, Edward J. Bedrick, Gerard A. Gioia, Timothy B. Meier, Nicholas A. Shaff, Davin K. Quinn, Richard A. Campbell, John P. Phillips, Andrei A. Vakhtin, Robert E. Sapien, Andrew R. Mayer
Journal: Journal of the International Neuropsychological Society , First View
Published online by Cambridge University Press: 11 August 2021, pp. 1-13
Retrospective self-report is typically used for diagnosing previous pediatric traumatic brain injury (TBI). A new semi-structured interview instrument (New Mexico Assessment of Pediatric TBI; NewMAP TBI) investigated test–retest reliability for TBI characteristics in both the TBI that qualified for study inclusion and for lifetime history of TBI.
One hundred and eighty-four mTBI patients (aged 8–18), 156 matched healthy controls (HC), and their parents completed the NewMAP TBI within 11 days (subacute; SA) and 4 months (early chronic; EC) of injury, with a subset returning at 1 year (late chronic; LC).
The test–retest reliability of common TBI characteristics [loss of consciousness (LOC), post-traumatic amnesia (PTA), retrograde amnesia, confusion/disorientation] and post-concussion symptoms (PCS) were examined across study visits. Aside from PTA, binary reporting (present/absent) for all TBI characteristics exhibited acceptable (≥0.60) test–retest reliability for both Qualifying and Remote TBIs across all three visits. In contrast, reliability for continuous data (exact duration) was generally unacceptable, with LOC and PCS meeting acceptable criteria at only half of the assessments. Transforming continuous self-report ratings into discrete categories based on injury severity resulted in acceptable reliability. Reliability was not strongly affected by the parent completing the NewMAP TBI.
Categorical reporting of TBI characteristics in children and adolescents can aid clinicians in retrospectively obtaining reliable estimates of TBI severity up to a year post-injury. However, test–retest reliability is strongly impacted by the initial data distribution, selected statistical methods, and potentially by patient difficulty in distinguishing among conceptually similar medical concepts (i.e., PTA vs. confusion).
TJALLING C. KOOPMANS ECONOMETRIC THEORY PRIZE 2018 – 2020
Peter C. B. Phillips
Journal: Econometric Theory / Volume 37 / Issue 4 / August 2021
Published online by Cambridge University Press: 19 August 2021, pp. 849-850
Linear stability of shallow morphodynamic flows
Jake Langham, Mark J. Woodhouse, Andrew J. Hogg, Jeremy C. Phillips
Journal: Journal of Fluid Mechanics / Volume 916 / 10 June 2021
Published online by Cambridge University Press: 12 April 2021, A31
Print publication: 10 June 2021
It is increasingly common for models of shallow-layer overland flows to include equations for the evolution of the underlying bed (morphodynamics) and the motion of an associated sedimentary phase. We investigate the linear stability properties of these systems in considerable generality. Naive formulations of the morphodynamics, featuring exchange of sediment between a well-mixed suspended load and the bed, lead to mathematically ill-posed governing equations. This is traced to a singularity in the linearised system at Froude number ${\textit {Fr}} = 1$ that causes unbounded unstable growth of short-wavelength disturbances. The inclusion of neglected physical processes can restore well posedness. Turbulent momentum diffusion (eddy viscosity) and a suitably parametrised bed load sediment transport are shown separately to be sufficient in this regard. However, we demonstrate that such models typically inherit an associated instability that is absent from non-morphodynamic settings. Implications of our analyses are considered for simple generic closures, including a drag law that switches between fluid and granular behaviour, depending on the sediment concentration. Steady morphodynamic flows bifurcate into two states: dilute flows, which are stable at low ${\textit {Fr}}$, and concentrated flows which are always unstable to disturbances in concentration. By computing the growth rates of linear modes across a wide region of parameter space, we examine in detail the effects of specific model parameters including the choices of sediment erodibility, eddy viscosity and bed load flux. These analyses may be used to inform the ongoing development of operational models in engineering and geosciences.
THE ECONOMETRIC THEORY AWARDS 2021
Journal: Econometric Theory / Volume 37 / Issue 2 / April 2021
Published online by Cambridge University Press: 21 April 2021, p. 408
Print publication: April 2021
Australian square kilometre array pathfinder: I. system description
A. W. Hotan, J. D. Bunton, A. P. Chippendale, M. Whiting, J. Tuthill, V. A. Moss, D. McConnell, S. W. Amy, M. T. Huynh, J. R. Allison, C. S. Anderson, K. W. Bannister, E. Bastholm, R. Beresford, D. C.-J. Bock, R. Bolton, J. M. Chapman, K. Chow, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, I. J. Feain, T. M. O. Franzen, D. George, N. Gupta, G. A. Hampson, L. Harvey-Smith, D. B. Hayman, I. Heywood, C. Jacka, C. A. Jackson, S. Jackson, K. Jeganathan, S. Johnston, M. Kesteven, D. Kleiner, B. S. Koribalski, K. Lee-Waddell, E. Lenc, E. S. Lensson, S. Mackay, E. K. Mahony, N. M. McClure-Griffiths, R. McConigley, P. Mirtschin, A. K. Ng, R. P. Norris, S. E. Pearce, C. Phillips, M. A. Pilawa, W. Raja, J. E. Reynolds, P. Roberts, D. N. Roxby, E. M. Sadler, M. Shields, A. E. T. Schinckel, P. Serra, R. D. Shaw, T. Sweetnam, E. R. Troup, A. Tzioumis, M. A. Voronkov, T. Westmeier
Published online by Cambridge University Press: 05 March 2021, e009
In this paper, we describe the system design and capabilities of the Australian Square Kilometre Array Pathfinder (ASKAP) radio telescope at the conclusion of its construction project and commencement of science operations. ASKAP is one of the first radio telescopes to deploy phased array feed (PAF) technology on a large scale, giving it an instantaneous field of view that covers $31\,\textrm{deg}^{2}$ at $800\,\textrm{MHz}$. As a two-dimensional array of 36 $\times$12 m antennas, with baselines ranging from 22 m to 6 km, ASKAP also has excellent snapshot imaging capability and 10 arcsec resolution. This, combined with 288 MHz of instantaneous bandwidth and a unique third axis of rotation on each antenna, gives ASKAP the capability to create high dynamic range images of large sky areas very quickly. It is an excellent telescope for surveys between 700 and $1800\,\textrm{MHz}$ and is expected to facilitate great advances in our understanding of galaxy formation, cosmology, and radio transients while opening new parameter space for discovery of the unknown.
Towards establishing no observed adverse effect levels (NOAEL) for different sources of dietary phosphorus in feline adult diets: results from a 7-month feeding study
Jennifer C. Coltherd, Janet E. Alexander, Claire Pink, John Rawlings, Jonathan Elliott, Richard Haydock, Laura J. Carvell-Miller, Vincent C. Biourge, Luis Molina, Richard Butterwick, Darren W. Logan, Phillip Watson, Anne Marie Bakke
Journal: British Journal of Nutrition / Volume 126 / Issue 11 / 14 December 2021
Published online by Cambridge University Press: 08 February 2021, pp. 1626-1641
Print publication: 14 December 2021
High dietary phosphorus (P), particularly soluble salts, may contribute to chronic kidney disease development in cats. The aim of the present study was to assess the safety of P supplied at 1 g/1000 kcal (4184kJ) from a highly soluble P salt in P-rich dry format feline diets. Seventy-five healthy adult cats (n 25/group) were fed either a low P control (1·4 g/1000 kcal [4184kJ]; Ca:P ratio 0·97) or one of two test diets with 4 g/1000 kcal (4184 kJ); Ca:P 1·04 or 5 g/1000 kcal (4184kJ); Ca:P 1·27, both incorporating 1 g/1000 kcal (4184 kJ) sodium tripolyphosphate (STPP) – for a period of 30 weeks in a randomised parallel-group study. Health markers in blood and urine, glomerular filtration rate, renal ultrasound and bone density were assessed at baseline and at regular time points. At the end of the test period, responses following transition to a commercial diet (total P – 2·34 g/1000 kcal [4184kJ], Ca:P 1·3) for a 4-week washout period were also assessed. No adverse effects on general, kidney or bone (skeletal) function and health were observed. P and Ca balance, some serum biochemistry parameters and regulatory hormones were increased in cats fed test diets from week 2 onwards (P ≤ 0·05). Data from the washout period suggest that increased serum creatinine and urea values observed in the two test diet groups were influenced by dietary differences during the test period, and not indicative of changes in renal function. The present data suggest no observed adverse effect level for feline diets containing 1 g P/1000 kcal (4184 kJ) from STPP and total P level of up to 5 g/1000 kcal (4184 kJ) when fed for 30 weeks.
Habitual protein intake appears to modulate postprandial muscle protein synthesis responses to feeding in youth but not in older age
S L. Mathewson, A L. Gordon, K. Smith, P J. Atherton, C A. Greig, B E. Phillips
Journal: Proceedings of the Nutrition Society / Volume 80 / Issue OCE1 / 2021
Published online by Cambridge University Press: 30 April 2021, E2
Print publication: 2021
OPTIMAL BANDWIDTH SELECTION IN NONLINEAR COINTEGRATING REGRESSION
Qiying Wang, Peter C. B. Phillips
Journal: Econometric Theory , First View
Published online by Cambridge University Press: 14 December 2020, pp. 1-13
We study optimal bandwidth selection in nonparametric cointegrating regression where the regressor is a stochastic trend process driven by short or long memory innovations. Unlike stationary regression, the optimal bandwidth is found to be a random sequence which depends on the sojourn time of the process. All random sequences $h_{n}$ that lie within a wide band of rates as the sample size $n\rightarrow \infty $ have the property that local level and local linear kernel estimates are asymptotically normal, which enables inference and conveniently corresponds to limit theory in the stationary regression case. This finding reinforces the distinctive flexibility of data-based nonparametric regression procedures for nonstationary nonparametric regression. The present results are obtained under exogenous regressor conditions, which are restrictive but which enable flexible data-based methods of practical implementation in nonparametric predictive regressions within that environment.
Bilingual language experience as a multidimensional spectrum: Associations with objective and subjective language proficiency
Jason W. Gullifer, Shanna Kousaie, Annie C. Gilbert, Angela Grant, Nathalie Giroud, Kristina Coulter, Denise Klein, Shari Baum, Natalie Phillips, Debra Titone
Journal: Applied Psycholinguistics / Volume 42 / Issue 2 / March 2021
Published online by Cambridge University Press: 11 December 2020, pp. 245-278
Print publication: March 2021
Despite the multifactorial space of language experience in which people continuously vary, bilinguals are often dichotomized into ostensibly homogeneous groups. The timing of language exposure (age of acquisition) to a second language (L2) is one well-studied construct that is known to impact language processing, cognitive processing, and brain organization, but recent work shows that current language exposure is also a crucial determinant in these domains. Critically, many indices of bilingual experience are inherently subjective and based on self-report questionnaires. Such measures have been criticized in favor of objective measures of language ability (e.g., naming ability or verbal fluency). Here, we estimate the bilingual experience jointly as a function of multiple continuous aspects of experience, including the timing of language exposure, the amount of L2 exposure across communicative contexts, and language entropy (a flexible measure of language balance) across communicative contexts. The results suggest that current language exposure exhibits distinct but interrelated patterns depending on the socio-experiential context of language usage. They also suggest that, counterintuitively, our sample more accurately self-assesses L2 proficiency than native language proficiency. A precise quantification of the multidimensional nature of bilingualism will enhance the ability of future research to assess language processing, acquisition, and control.
Proteins for bioinspired optical and electronic materials
Patrick B. Dennis, Elizabeth L. Onderko, Joseph M. Slocik, Lina J. Bird, Daniel A. Phillips, Wendy J. Crookes-Goodson, Sarah M. Glaven
Journal: MRS Bulletin / Volume 45 / Issue 12 / December 2020
Published online by Cambridge University Press: 10 December 2020, pp. 1027-1033
Print publication: December 2020
Nature has developed myriad ways for organisms to interact with their environment using light and electronic signals. Optical and electronic properties can be observed macroscopically by measuring light emission or electrical current, but are conferred at the molecular level by the arrangement of small biological molecules, specifically proteins. Here, we present a brief overview of the current uses of proteins for applications in optical and electronic materials. We provide the natural context for a range of light-emitting, light-receiving, and electronically conductive proteins, as well as demonstrate uses in biomaterials. Examples of how genetic engineering has been used to expand the range of functional properties of naturally occurring proteins are provided. We touch on how approaches to patterning and scaffolding optical and electronic proteins can be achieved using proteins with this inherent capability. While much research is still required to bring their use into the mainstream, optical and electronic proteins have the potential to create biomaterials with properties unmatched using conventional chemical synthesis.
The Rapid ASKAP Continuum Survey I: Design and first results
Australian SKA Pathfinder
D. McConnell, C. L. Hale, E. Lenc, J. K. Banfield, George Heald, A. W. Hotan, James K. Leung, Vanessa A. Moss, Tara Murphy, Andrew O'Brien, Joshua Pritchard, Wasim Raja, Elaine M. Sadler, Adam Stewart, Alec J. M. Thomson, M. Whiting, James R. Allison, S. W. Amy, C. Anderson, Lewis Ball, Keith W. Bannister, Martin Bell, Douglas C.-J. Bock, Russ Bolton, J. D. Bunton, A. P. Chippendale, J. D. Collier, F. R. Cooray, T. J. Cornwell, P. J. Diamond, P. G. Edwards, N. Gupta, Douglas B. Hayman, Ian Heywood, C. A. Jackson, Bärbel S. Koribalski, Karen Lee-Waddell, N. M. McClure-Griffiths, Alan Ng, Ray P. Norris, Chris Phillips, John E. Reynolds, Daniel N. Roxby, Antony E. T. Schinckel, Matt Shields, Chenoa Tremblay, A. Tzioumis, M. A. Voronkov, Tobias Westmeier
Published online by Cambridge University Press: 30 November 2020, e048
The Rapid ASKAP Continuum Survey (RACS) is the first large-area survey to be conducted with the full 36-antenna Australian Square Kilometre Array Pathfinder (ASKAP) telescope. RACS will provide a shallow model of the ASKAP sky that will aid the calibration of future deep ASKAP surveys. RACS will cover the whole sky visible from the ASKAP site in Western Australia and will cover the full ASKAP band of 700–1800 MHz. The RACS images are generally deeper than the existing NRAO VLA Sky Survey and Sydney University Molonglo Sky Survey radio surveys and have better spatial resolution. All RACS survey products will be public, including radio images (with $\sim$ 15 arcsec resolution) and catalogues of about three million source components with spectral index and polarisation information. In this paper, we present a description of the RACS survey and the first data release of 903 images covering the sky south of declination $+41^\circ$ made over a 288-MHz band centred at 887.5 MHz.
Severity of Ongoing Post-Concussive Symptoms as a Predictor of Cognitive Performance Following a Pediatric Mild Traumatic Brain Injury
Veronik Sicard, Danielle C. Hergert, Sharvani Pabbathi Reddy, Cidney R. Robertson-Benta, Andrew B. Dodd, Nicholas A. Shaff, David D. Stephenson, Keith Owen Yeates, Jason A. Cromer, Richard A. Campbell, John P. Phillips, Robert E. Sapien, Andrew R. Mayer
Journal: Journal of the International Neuropsychological Society / Volume 27 / Issue 7 / August 2021
Published online by Cambridge University Press: 27 November 2020, pp. 686-696
This study aimed to examine the predictors of cognitive performance in patients with pediatric mild traumatic brain injury (pmTBI) and to determine whether group differences in cognitive performance on a computerized test battery could be observed between pmTBI patients and healthy controls (HC) in the sub-acute (SA) and the early chronic (EC) phases of injury.
203 pmTBI patients recruited from emergency settings and 159 age- and sex-matched HC aged 8–18 rated their ongoing post-concussive symptoms (PCS) on the Post-Concussion Symptom Inventory and completed the Cogstate brief battery in the SA (1–11 days) phase of injury. A subset (156 pmTBI patients; 144 HC) completed testing in the EC (~4 months) phase.
Within the SA phase, a group difference was only observed for the visual learning task (One-Card Learning), with pmTBI patients being less accurate relative to HC. Follow-up analyses indicated higher ongoing PCS and higher 5P clinical risk scores were significant predictors of lower One-Card Learning accuracy within SA phase, while premorbid variables (estimates of intellectual functioning, parental education, and presence of learning disabilities or attention-deficit/hyperactivity disorder) were not.
The absence of group differences at EC phase is supportive of cognitive recovery by 4 months post-injury. While the severity of ongoing PCS and the 5P score were better overall predictors of cognitive performance on the Cogstate at SA relative to premorbid variables, the full regression model explained only 4.1% of the variance, highlighting the need for future work on predictors of cognitive outcomes.
Relatedness and the composition of communities over time: Evaluating phylogenetic community structure in the late Cenozoic record of bivalves
Lucy M. Chang, Phillip L. Skipwith
Journal: Paleobiology / Volume 47 / Issue 2 / March 2021
Published online by Cambridge University Press: 16 October 2020, pp. 301-313
Understanding the mechanisms that prevent or promote the coexistence of taxa at local scales is critical to understanding how biodiversity is maintained. Competitive exclusion and environmental filtering are two processes thought to limit which taxa become established in a community. However, determining the relative importance of the two processes is a complex task, especially when the critical initial stages of colonization cannot be directly observed. Here, we explore the use of phylogenetic community structure for identifying filtering mechanisms in a fossil community. We integrated a time-calibrated molecular phylogeny of bivalve genera with a spatial dataset of late Cenozoic bivalves from the Pacific coast of North America to characterize how the community that was present in the semirestricted San Joaquin Basin (SJB) embayment of present-day California was phylogenetically structured. We employed phylogenetic distance-based metrics across six time bins spanning 27–2.5 Ma and found no evidence of significant clustering or evenness in the SJB community when compared with communities randomly assembled from the regional source pool. Additionally, we found that new colonizers into the SJB were not significantly more or less closely related to native taxa than expected by chance. These findings suggest that neither competitive exclusion nor environmental filtering were overwhelmingly influential factors shaping the composition of the SJB community over time. We further discuss interpretations of these patterns in light of current understandings in community phylogenetics and reiterate the critical role historical perspectives play in how community assembly rules are assessed.
Nutrient-dense protein as a primary dietary strategy in healthy ageing: please sir, may we have more?
E. A. Nunes, B. S. Currier, C. Lim, S. M. Phillips
Journal: Proceedings of the Nutrition Society / Volume 80 / Issue 2 / May 2021
Print publication: May 2021
A progressive decrement in muscle mass and muscle function, sarcopoenia, accompanies ageing. The loss of skeletal muscle mass and function is the main feature of sarcopoenia. Preventing the loss of muscle mass is relevant since sarcopoenia can have a significant impact on mobility and the quality of life of older people. Dietary protein and physical activity have an essential role in slowing muscle mass loss and helping to maintain muscle function. However, the current recommendations for daily protein ingestion for older persons appear to be too low and are in need of adjustment. In this review, we discuss the skeletal muscle response to protein ingestion, and review the data examining current dietary protein recommendations in the older subjects. Furthermore, we review the concept of protein quality and the important role that nutrient-dense protein (NDP) sources play in meeting overall nutrient requirements and improving dietary quality. Overall, the current evidence endorses an increase in the daily ingestion of protein with emphasis on the ingestion of NDP choices by older adults.
Bilinguals benefit from semantic context while perceiving speech in noise in both of their languages: Electrophysiological evidence from the N400 ERP
Kristina Coulter, Annie C. Gilbert, Shanna Kousaie, Shari Baum, Vincent L. Gracco, Denise Klein, Debra Titone, Natalie A. Phillips
Journal: Bilingualism: Language and Cognition / Volume 24 / Issue 2 / March 2021
Although bilinguals benefit from semantic context while perceiving speech-in-noise in their native language (L1), the extent to which bilinguals benefit from semantic context in their second language (L2) is unclear. Here, 57 highly proficient English–French/French–English bilinguals, who varied in L2 age of acquisition, performed a speech-perception-in-noise task in both languages while event-related brain potentials were recorded. Participants listened to and repeated the final word of sentences high or low in semantic constraint, in quiet and with a multi-talker babble mask. Overall, our findings indicate that bilinguals do benefit from semantic context while perceiving speech-in-noise in both their languages. Simultaneous bilinguals showed evidence of processing semantic context similarly to monolinguals. Early sequential bilinguals recruited additional neural resources, suggesting more effective use of semantic context in L2, compared to late bilinguals. Semantic context use was not associated with bilingual language experience or working memory.
ROBUST TESTS FOR WHITE NOISE AND CROSS-CORRELATION
Violetta Dalla, Liudas Giraitis, Peter C. B. Phillips
Published online by Cambridge University Press: 21 September 2020, pp. 1-29
Commonly used tests to assess evidence for the absence of autocorrelation in a univariate time series or serial cross-correlation between time series rely on procedures whose validity holds for i.i.d. data. When the series are not i.i.d., the size of correlogram and cumulative Ljung–Box tests can be significantly distorted. This paper adapts standard correlogram and portmanteau tests to accommodate hidden dependence and nonstationarities involving heteroskedasticity, thereby uncoupling these tests from limiting assumptions that reduce their applicability in empirical work. To enhance the Ljung–Box test for non-i.i.d. data, a new cumulative test is introduced. Asymptotic size of these tests is unaffected by hidden dependence and heteroskedasticity in the series. Related extensions are provided for testing cross-correlation at various lags in bivariate time series. Tests for the i.i.d. property of a time series are also developed. An extensive Monte Carlo study confirms good performance in both size and power for the new tests. Applications to real data reveal that standard tests frequently produce spurious evidence of serial correlation.
Association between surgery with anesthesia and cognitive decline in older adults: Analysis using shared parameter models for informative dropout
Katrina L. Devick, Juraj Sprung, Michelle Mielke, Ronald C. Petersen, Phillip J. Schulte
Journal: Journal of Clinical and Translational Science / Volume 5 / Issue 1 / 2021
Published online by Cambridge University Press: 04 August 2020, e27
Objectives/Goals:
The association between surgery with general anesthesia (exposure) and cognition (outcome) among older adults has been studied with mixed conclusions. We revisited a recent analysis to provide missing data education and discuss implications of biostatistical methodology for informative dropout following dementia diagnosis.
Methods/study population:
We used data from the Mayo Clinic Study of Aging, a longitudinal study of prevalence, incidence, and risk factors for mild cognitive impairment (MCI) and dementia. We fit linear mixed effects models (LMMs) to assess the association between anesthesia exposure and subsequent trajectories of cognitive z-scores assuming data missing at random, hypothesizing that exposure is associated with greater decline in cognitive function. Additionally, we used shared parameter models for informative dropout assuming data missing not at random.
A total of 1948 non-demented participants were included. Median age was 79 years, 49% were female, and 16% had MCI at enrollment. Among median follow-up of 4 study visits over 6.6 years, 172 subjects developed dementia, 270 died, and 594 participants underwent anesthesia. In LMMs, exposure to anesthesia was associated with decline in cognitive function over time (change in annual cognitive z-score slope = −0.063, 95% CI: (−0.080, −0.046), p < 0.001). Accounting for informative dropout using shared parameter models, exposure was associated with greater cognitive decline (change in annual slope = −0.081, 95% CI: (−0.137, −0.026), p = 0.004).
We revisited prior work by our group with a focus on informative dropout. Although the conclusions are similar, we demonstrated the potential impact of novel biostatistics methodology in longitudinal clinical research.
Developing community-based urine sampling methods to deploy biomarker technology for the assessment of dietary exposure
Amanda J Lloyd, Thomas Wilson, Naomi D Willis, Laura Lyons, Helen Phillips, Hayley G Janssen, Martina Stiegler, Long Xie, Kathleen Tailliart, Manfred Beckmann, Leo Stevenson, John C Mathers, John Draper
Journal: Public Health Nutrition / Volume 23 / Issue 17 / December 2020
Published online by Cambridge University Press: 11 June 2020, pp. 3081-3092
Obtaining objective, dietary exposure information from individuals is challenging because of the complexity of food consumption patterns and the limitations of self-reporting tools (e.g., FFQ and diet diaries). This hinders research efforts to associate intakes of specific foods or eating patterns with population health outcomes.
Dietary exposure can be assessed by the measurement of food-derived chemicals in urine samples. We aimed to develop methodologies for urine collection that minimised impact on the day-to-day activities of participants but also yielded samples that were data-rich in terms of targeted biomarker measurements.
Urine collection methodologies were developed within home settings.
Different cohorts of free-living volunteers.
Home collection of urine samples using vacuum transfer technology was deemed highly acceptable by volunteers. Statistical analysis of both metabolome and selected dietary exposure biomarkers in spot urine collected and stored using this method showed that they were compositionally similar to urine collected using a standard method with immediate sample freezing. Even without chemical preservatives, samples can be stored under different temperature regimes without any significant impact on the overall urine composition or concentration of forty-six exemplar dietary exposure biomarkers. Importantly, the samples could be posted directly to analytical facilities, without the need for refrigerated transport and involvement of clinical professionals.
This urine sampling methodology appears to be suitable for routine use and may provide a scalable, cost-effective means to collect urine samples and to assess diet in epidemiological studies.
|
CommonCrawl
|
Christine must buy at least $45$ fluid ounces of milk at the store. The store only sells milk in $200$ milliliter bottles. If there are $33.8$ fluid ounces in $1$ liter, then what is the smallest number of bottles that Christine could buy? (You may use a calculator on this problem.)
First we convert the amount of milk Christine must buy from ounces to liters. We use the conversion factor $\frac{1\ \text{L}}{33.8\ \text{fl.oz}}$ to obtain $45\ \text{fl.oz} \cdot \frac{1\ \text{L}}{33.8\ \text{fl.oz}} \approx 1.331\ \text{L}$. There are $1000\ \text{mL}$ in one liter, so Christine needs about $1331\ \text{mL}$ of milk. Since $\frac{1331}{200} \approx 6.657$, Christine must buy at least $\boxed{7}$ bottles of milk at the store.
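The same computation in a few lines of Python (a sketch of the arithmetic above, with the given conversion factor hard-coded):

import math

FL_OZ_PER_LITER = 33.8                      # given: fluid ounces per liter
needed_ml = 45 / FL_OZ_PER_LITER * 1000     # about 1331.36 mL
bottles = math.ceil(needed_ml / 200)        # 6.657... rounds up to 7
print(bottles)                              # 7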
|
Math Dataset
|
\begin{document}
\begin{abstract} \noindent
It has been previously shown by the authors that a directed graph on a linearly ordered set of edges (ordered graph) with adjacent unique source and sink (bipolar digraph) has a unique fully optimal spanning tree, which satisfies a simple criterion on fundamental cycle/cocycle directions. This result yields, for any ordered graph, a canonical bijection between bipolar orientations and spanning trees with internal activity 1 and external activity 0 in the sense of the Tutte polynomial. This bijection can be extended to all orientations and all spanning trees, yielding the active bijection, presented for graphs in a companion paper.
In this paper, we specifically address the problem of the computation of the fully optimal spanning tree of an ordered bipolar digraph. In contrast with the inverse mapping, built by a straightforward single pass over the edge set, the direct computation is not easy and had previously been left aside.
We give two independent constructions. The first one is a deletion/contraction recursion, involving an exponential number of minors. It is structurally significant but it is efficient only for building the whole bijection (i.e. all images) at once. The second one is more complicated and is the main contribution of the paper. It involves
just one minor for each edge of the resulting spanning tree, and it is a translation and an adaptation in the case of graphs,
in terms of weighted cocycles, of a general geometrical linear programming type
algorithm,
which allows for a polynomial time~complexity.
\end{abstract}
\title{Computing the fully optimal spanning tree \ of an ordered bipolar directed graph}
\section{Introduction} \label{sec:intro}
In a previous paper \cite{GiLV05} (see also \cite{AB1}), we showed that a directed graph ${\ov G}$ on a linearly ordered set of edges, with adjacent unique source and sink connected by the smallest edge (ordered bipolar digraph), has a unique remarkable spanning tree that satisfies a simple criterion on fundamental cycle/cocycle directions, and that we call \emph{the fully optimal spanning tree $\alpha({\ov G})$ of ${\ov G}$}.
Associating bipolar orientations of an ordered graph with their fully optimal spanning trees provides a canonical bijection with spanning trees with internal activity 1 and external activity 0 in the sense of the Tutte polynomial \cite{Tu54}. It is a classical result from \cite{Za75} that those two sets have the same size, also known as the $\beta$-invariant $\beta(G)$ of the graph \cite{Cr67}. We call this bijection \emph{the uniactive bijection of $G$}.
This bijection can be extended to all orientations and all spanning trees, yielding \emph{the active bijection}, introduced in terms of graphs first in \cite{GiLV05}, and detailed next in \cite{ABG2}, for which the present paper is a complementary companion paper.
Beyond graphs, the general context of the active bijection is oriented matroids, as studied by the authors in a series of papers, even more detailed, notably~\cite{AB1, AB2-b, AB3, AB4}.\break
See the introductions of \cite{ABG2} (or also \cite{AB2-b}) for an overview, more information and references.
The general purpose of these works is to study graphs (or hyperplane arrangements, or oriented matroids) on a linearly ordered set of edges (or ground set), under
various
structural and enumerative aspects.
In the present paper we address the problem of computing the fully optimal spanning tree. Its existence and uniqueness is a tricky combinatorial theorem, proved in \cite[Theorem 4]{GiLV05} (see also the companion paper \cite[Section \ref{ABG2-subsec:fob}]{ABG2} for a short summary in graphs, see also \cite[Theorem 4.5]{AB1} for a generalization and a geometrical interpretation in oriented matroids, see also \cite[Section 5]{AB2-b} for a summary of various interpretations and implications of this theorem). As recalled in Section \ref{subsec:prelim-fob}, the inverse mapping, producing a bipolar orientation for which a given spanning tree is fully optimal, is very easy to compute by a single pass over the ordered set of edges. But the direct computation is complicated and it had not been addressed in previous papers. When generalized to real hyperplane arrangements, the problem contains and strengthens the real linear programming problem (as shown in~\cite{AB1}, hence the name \emph{fully optimal}).
This ``one way function'' feature is a noteworthy aspect of the active bijection. Here, we give two independent constructions to compute the mapping, that is to compute the fully optimal spanning tree of an ordered bipolar digraph.
The first construction, in Section \ref{sec:induction}, is recursive, by deletion/contraction of the greatest element.
Let us observe that it is usual to have some deletion/contraction constructions when the Tutte polynomial is involved, and that this construction fits a general framework involving all orientations and spanning trees, presented in \cite[Section \ref{ABG2-subsec:ind-framework}]{ABG2} and detailed in \cite{AB4}.
This construction of the mapping has a short statement and proof, and it can be used to efficiently build the whole bijection at once (i.e. all the images simultaneously, see Remark \ref{rk:ind-10}). So, it is satisfying for the structural understanding and for a global approach,
but it is not satisfying in terms of computational complexity for building one single image
as it involves an exponential number of minors.
The second construction, in Section \ref{sec:fob}, is more technical and is the main contribution of the paper. It is
efficient from the computational complexity viewpoint because it involves
only one minor for each edge of the resulting spanning tree,
and it consists in searching, successively in each minor, for the smallest cocycle with respect to a linear ordering of the set of cocycles induced by a suitable weight function.
This algorithm is an adaptation in the case of graphs (implicitly using that graphic matroids are binary) of a general geometrical construction obtained by elaborations on
pseudo/real linear programming (in oriented matroids / real hyperplane arrangements). Briefly, the ordering of cocycles is a substitute for some
multiobjective programming, where a vertex (geometrical counterpart of a cocycle) is optimized with respect to a sequence of objective functions (transformed here into weights on edges).
By this way, the fully optimal spanning tree can be computed in polynomial time. See Remark \ref{rk:lp} and the end of Section \ref{sec:fob} for more discussion. See \cite{AB3} for the general geometrical construction.
See also \cite{GiLV09} for a short formulation of the same algorithm in terms of real hyperplane arrangements. See \cite{AB1} for the primary relations between the full optimality criterion and usual linear programming optimality (see also \cite{GiLV04} in the uniform case).
In addition, let us recall from \cite[Section 4]{GiLV05} (see also \cite[Section \ref{ABG2-subsec:fob}]{ABG2} for more details) that the bijection between bipolar orientations and their fully optimal spanning trees directly yields a bijection between cyclic-bipolar orientations (the strongly connected orientations obtained from bipolar orientations by reversing the source-sink edge) and spanning trees with internal activity $0$ and external activity~$1$. Hence, the algorithms developed here can also be used for this second bijection. Let us mention that
this framework involves a remarkable duality property, called \emph{the active duality},
which is reflected in several ways. First, those two bijections are related to each other consistently with cycle/cocycle duality (that is oriented matroid duality, which extends planar graph duality, see \cite[Section \ref{ABG2-subsec:fob}]{ABG2} in graphs, see also \cite[Section 5]{AB1}, or \cite[Section 5]{AB2-b} for a complete overview). Second, this duality property can be seen as a strengthening of linear programming duality (see \cite[Section 5]{AB1}).
Third, it is related to the equivalence of two dual formulations in the deletion/contraction construction (see Remark~\ref{rk:ind-10-equivalence}).
Lastly, it is important to point out that the two aforementioned constructions of the fully optimal spanning tree do not give a new proof of its existence and uniqueness:
on the contrary,
this crucial fundamental result is used to ensure the correctness of these two constructions.
\section{Preliminaries} \label{sec:prelim}
\subsection{Usual terminology and tools from oriented matroid theory} \label{subsec:prelim-tools}
All graphs considered in this paper will be connected. They can have loops and multiple edges.
A \emph{digraph} is a directed graph, and an \emph{ordered graph} is a graph $G=(V,E)$ on a linearly ordered set of edges $E$. Edges of a directed graph are supposed to be \emph{directed} or equally \emph{oriented}. A directed graph will be denoted with an arrow, ${\ov G}$, and the underlying undirected graph without arrow, $G$.
The cycles, cocycles, and spanning trees of a graph $G=(V,E)$ are considered as subsets of $E$, hence their edges can be called their \emph{elements}. The cycles and cocycles of $G$ are always understood as being minimal for inclusion.
Given $F\subseteq E$, we denote $G(F)$ the graph obtained by restricting the edge set of $G$ to $F$, that is the minor $G\setminus (E\setminus F)$ of $G$.
A minor ${\ov G}/\{e\}$, resp. ${\ov G}\backslash\{e\}$, for $e\in E$, can be denoted for short ${\ov G}/e$, resp. ${\ov G}\backslash e$.
For $e\in E$, we denote $-_e{\ov G}$ the digraph obtained by reversing the direction of the edge $e$ in ${\ov G}$.
Let $G$ be an ordered (connected) graph and let $T$ be a spanning tree of $G$.
For $t\in T$, the \emph{fundamental cocycle} of $t$ with respect to $T$, denoted $C_G^*(T;t)$, or $C^*(T;t)$ for short, is the cocycle joining the two connected components of $T\setminus \{t\}$. Equivalently, it is the unique cocycle contained in $(E\setminus T)\cup\{t\}$. For $e\not\in T$, the \emph{fundamental cycle} of $e$ with respect to $T$, denoted $C_G(T;e)$, or $C(T;e)$ for short,
is the unique cycle contained in $T\cup\{e\}$.
The technique used in the paper is close to oriented matroid technique, which notably means that it focuses on edges, whereas vertices are usually not used.
Given an orientation ${\ov G}$ of a graph $G$, we will have to deal with directions of edges in cycles and cocycles of the underlying graph $G$, and, sometimes, to deal with combinations of cycles or cocycles. To achieve this, it is convenient to use some practical notations and classical properties from oriented matroid theory~\cite{OM99}.
A \emph{signed edge subset} is a subset $C\subseteq E$ provided with a partition into a positive part $C^+$ and a negative part $C^-$. A cycle, resp. cocycle, of $G$ provides two opposite signed edge subsets called \emph{signed cycles}, resp. \emph{signed cocycles}, of ${\ov G}$ by giving a sign in $\{+,-\}$ to each of its elements accordingly with the orientation ${\ov G}$ of $G$ the natural way.
Precisely: two edges having the same direction with respect to a running direction of a cycle will have the same sign in the associated signed cycles, and two edges having the same direction with respect to the partition of the vertex set induced by a cocycle will have the same sign in the associated signed cocycles.
In particular, a directed cycle, resp. a directed cocycle, of ${\ov G}$ corresponds to a signed cycle, resp. a signed cocycle, all the elements of which are positive (and to its opposite, all the elements of which are negative).
We will often use the same notation $C$ either for a signed edge subset (formally a couple $(C^+,C^-)$, e.g. signed cycle) or for the underlying subset ($C^+\uplus C^-$, e.g. graph cycle).
When necessary, given a spanning tree $T$ of $G$ and an edge $t\in T$, resp. an edge $e\not\in T$, the fundamental cocycle $C^*(T;t)$, resp. the fundamental cycle $C(T;e)$, induces two opposite signed cocycles, resp. signed cycles, of ${\ov G}$; then, by convention, the (signed) fundamental cocycle $C^*(T;t)$, resp. the (signed) fundamental cycle $C(T;e)$, is considered to be the one in which $t$ is positive, resp. $e$ is positive.
The next three tools can be skipped in a first reading, as they will only be used in the proof of the main result of the paper, namely Theorem \ref{th:fob}. First, let us recall the definition of the \emph{composition} $C\circ D$ between two signed edge subsets as the edge subset $C\cup D$ with signs inherited from $C$ for the elements of $C$ and inherited from $D$ for the elements of $D\setminus C$. We will use the classical {\it orthogonality} property between a cocycle $D$ and a composition of cycles $C$ of ${\ov G}$, that is: $C\cap D\not=\emptyset$ implies $(C^+\cap D^+)\cup(C^-\cap D^-)\not=\emptyset$ and $(C^-\cap D^+)\cup(C^+\cap D^-)\not=\emptyset$.
Second, we recall that, given two cocycles $C$ and $C'$ of ${\ov G}$, and an element $f \in C \cup C'$ which does not have opposite signs in $C$ and $C'$, there exists a cocycle $D$ obtained by \emph{elimination between $C$ and $C'$ preserving $f$} such that $f\in D$, $D^+ \subseteq C^+\cup C'^+$, $D^- \subseteq C^-\cup C'^-$, and $D$ contains no element of $C\cap C'$ having opposite signs in $C$ and $C'$. This last property is a strengthening of the oriented matroid elimination property in the particular case of digraphs, a short proof of which is the following. Assume $C$ defines the partition $(C_1,C_2)$ of the set of vertices, and $C'$ defines the partition $(C'_1,C'_2)$, with a positive sign given to edges from $C_1$ to $C_2$ in $C$ and from $C'_1$ to $C'_2$ in $C'$. Then the edges having opposite signs in $C$ and $C'$ are those joining $C_1\cap C'_2$ and $C_2\cap C'_1$. Hence, with $V'= (C_1\cap C'_2) \cup (C_2\cap C'_1)$, the cut defined by the partition $(V',V\setminus V')$ of the vertex set contains a cocycle answering the problem.
Third, we recall the following easy property. Let $A,B\subseteq E$ with $A\cap B=\emptyset$, such that the minor $G/B\backslash A$ is connected (or equivalently: $G/B\backslash A$ has the same rank as $G/B$). If $D'$ is a cocycle of $G/B\backslash A$, then there exists a unique cocycle $D$ of $G$ such that $D\cap B=\emptyset$ and $D\setminus A=D'$. If the graphs are directed, then $D$ has the same signs as $D'$ on the elements of $D'$. We say that $D'$ is \emph{induced by} $D$, or that $D$ \emph{induces} $D'$.
\subsection{Bipolar orientations and fully optimal spanning trees} \label{subsec:prelim-fob}
We say that a directed graph ${\ov G}$ on the edge set $E$ is {\it bipolar with respect to $p\in E$} if ${\ov G}$ is acyclic and has a unique source and a unique sink which are the extremities of $p$. In particular, if ${\ov G}$ consists of a single edge $p$ which is an isthmus, then ${\ov G}$ is bipolar with respect to $p$. Equivalently, ${\ov G}$ is bipolar with respect to $p$ if and only if every edge of ${\ov G}$ is contained in a directed cocycle and every directed cocycle contains $p$
(for information: in other words, ${\ov G}$ has dual-orientation-activity $1$ and orientation-activity $0$, in the sense of \cite{LV84a}, see \cite{GiLV05, AB2-b} or \cite[Section \ref{ABG2-subsec:prelim-beta}]{ABG2}).
Another characterization is the following: ${\ov G}$ is bipolar w.r.t. $p$ if and only if ${\ov G}$ is acyclic and $-_p{\ov G}$ is strongly connected (for information: those orientations $-_p{\ov G}$ play an equivalent dual role, see the discussion at the end of Section \ref{sec:intro} or \cite[Section \ref{ABG2-subsec:fob}]{ABG2}).
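To make this first characterization concrete, here is a minimal computational sketch, in Python, of the bipolarity test; the edge-dictionary representation and all names are ours, chosen for illustration (this is not code from the cited works).
\begin{verbatim}
# Minimal sketch (ours): a digraph is a dict edge_id -> (tail, head).
# Bipolarity w.r.t. p = acyclic + unique source + unique sink, the source
# and the sink being the two extremities of p.
from collections import defaultdict, deque

def is_bipolar(edges, p):
    vertices = {v for (t, h) in edges.values() for v in (t, h)}
    indeg, succ = defaultdict(int), defaultdict(list)
    for (t, h) in edges.values():
        indeg[h] += 1
        succ[t].append(h)
    # acyclicity test (Kahn's algorithm): every vertex must get processed
    remaining = {v: indeg[v] for v in vertices}
    queue = deque(v for v in vertices if remaining[v] == 0)
    processed = 0
    while queue:
        x = queue.popleft()
        processed += 1
        for y in succ[x]:
            remaining[y] -= 1
            if remaining[y] == 0:
                queue.append(y)
    if processed != len(vertices):
        return False                      # a directed cycle exists
    sources = [v for v in vertices if indeg[v] == 0]
    sinks = [v for v in vertices if not succ[v]]
    return (len(sources) == 1 and len(sinks) == 1
            and set(edges[p]) == {sources[0], sinks[0]})
\end{verbatim}
With this representation (vertices are read off the edges), the test also rejects disconnected digraphs, since each connected component of an acyclic digraph contains a source.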
\begin{definition} \label{def:acyc-alpha}
Let ${\ov G}=(V,E)$ be a directed graph, on a linearly ordered set of edges, which is bipolar with respect to the minimal element $p$ of $E$. The {\it fully optimal spanning tree} $\alpha({\ov G})$ of ${\ov G}$ is the unique spanning tree $T$ of $G$ such~that:
$\bullet$ for all $b\in T\setminus p$, the directions (or the signs) of $b$ and $\mathrm{min}(C^*(T;b))$ are opposite in $C^*(T;b)$;
$\bullet$ for all $e\in E\setminus T$, the directions (or the signs) of $e$ and $\mathrm{min}(C(T;e))$ are opposite in $C(T;e)$.
\end{definition}
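For experimentation, the following is a minimal computational sketch, in Python, of the sign criterion of Definition \ref{def:acyc-alpha}: it builds the signed fundamental cycles and cocycles of a candidate spanning tree, with the sign conventions of Section \ref{subsec:prelim-tools}, and tests the two conditions. The edge-dictionary representation and all helper names are ours; this is not the authors' code.
\begin{verbatim}
# Sketch (ours): a digraph is a dict edge_id -> (tail, head); the edge ids
# play the role of the linear ordering; T is a set of ids forming a spanning
# tree.  Signed fundamental cycles/cocycles are dicts edge_id -> +1/-1.
from collections import defaultdict

def components_without(edges, T, removed):
    """Vertex components of the forest obtained from T by removing one edge."""
    vertices = {v for (a, b) in edges.values() for v in (a, b)}
    adj = defaultdict(list)
    for t in T:
        if t != removed:
            a, b = edges[t]
            adj[a].append(b)
            adj[b].append(a)
    comp = {}
    for s in vertices:
        if s in comp:
            continue
        comp[s] = s
        stack = [s]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y not in comp:
                    comp[y] = s
                    stack.append(y)
    return comp

def fundamental_cocycle(edges, T, t):
    """Signed cocycle C*(T;t), with t positive."""
    comp = components_without(edges, T, t)
    a, b = edges[t]
    C = {}
    for e, (u, v) in edges.items():
        if comp[u] == comp[a] and comp[v] == comp[b]:
            C[e] = +1
        elif comp[u] == comp[b] and comp[v] == comp[a]:
            C[e] = -1
    return C

def fundamental_cycle(edges, T, e):
    """Signed cycle C(T;e), with e positive."""
    u, v = edges[e]
    adj = defaultdict(list)
    for t in T:
        a, b = edges[t]
        adj[a].append((b, t, +1))   # traversing t along its direction
        adj[b].append((a, t, -1))   # traversing t against its direction
    parent = {v: None}              # tree path from v back to u
    stack = [v]
    while stack:
        x = stack.pop()
        for (y, t, s) in adj[x]:
            if y not in parent:
                parent[y] = (x, t, s)
                stack.append(y)
    C = {e: +1}                     # the cycle is e (from u to v), then v..u
    x = u
    while parent[x] is not None:
        x, t, s = parent[x]
        C[t] = s
    return C

def satisfies_criterion(edges, T):
    """Sign criterion of the definition above, for a candidate spanning tree T."""
    p = min(edges)
    if p not in T:
        return False
    for b in T:
        if b != p:
            C = fundamental_cocycle(edges, T, b)
            if C[min(C)] != -1:     # min element must be opposite to b
                return False
    for e in edges:
        if e not in T:
            C = fundamental_cycle(edges, T, e)
            if C[min(C)] != -1:     # min element must be opposite to e
                return False
    return True
\end{verbatim}
Checking a given candidate tree this way is straightforward; the point of Sections \ref{sec:induction} and \ref{sec:fob} is to construct the fully optimal spanning tree without scanning all spanning trees.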
The existence and uniqueness of such a spanning tree is the main result of \cite{GiLV05, AB1}, along with the next theorem.
Notice that a directed graph and its opposite are mapped onto the same spanning tree.
We say that a spanning tree $T$ has \emph{internal activity $1$ and external activity $0$}, or equivalently that $T$ is \emph{uniactive internal}, if: $\mathrm{min}(E)\in T$; for every $t\in T\setminus \mathrm{min}(E)$ we have $t\not=\mathrm{min}(C^*(T;t))$; and for every $e\in E\setminus T$ we have $e\not=\mathrm{min}(C(T;e))$.
In this paper, it is not necessary to define further the notion of activities of spanning trees, which comes from the theory of the Tutte polynomial (see \cite{GiLV05, ABG2, AB2-b}).
For information (not used in the paper), the number of uniactive internal spanning trees of $G$ does not depend on the linear ordering of $E$ and is known as the $\beta$-invariant $\beta(G)$ of $G$~\cite{Cr67}, while the number of bipolar orientations w.r.t. $p$
does not depend on $p$ and is equal to $2\beta(G)$ \cite{Za75}.
\begin{thm}[Key Theorem \cite{GiLV05, AB1}] \label{thm:bij-10} Let $G$ be a graph on a linearly ordered set of edges $E$ with $\mathrm{min}(E)=p$. The mapping
${\ov G}\mapsto \alpha({\ov G})$ yields a bijection between all bipolar orientations of $G$ w.r.t.~$p$, with the same fixed orientation for $p$, and all uniactive internal spanning trees of $G$.
\end{thm}
The bijection of Theorem \ref{thm:bij-10} is called \emph{the uniactive bijection} of the ordered graph $G$.
For completeness of the paper (though not used thereafter), let us recall that, from the constructive viewpoint, this bijection was built in \cite{GiLV05,AB1} by the inverse mapping, provided by a single pass algorithm over $T$ and fundamental cocycles, or dually over $E\setminus T$ and fundamental cycles. This algorithm is illustrated in \cite[Figure 1]{GiLV05}, on the same example that we will use in Section~\ref{sec:fob}. Equivalently, the inverse mapping can obviously be built by a single pass over $E$, choosing edge directions one by one so that the criterion of Definition \ref{def:acyc-alpha} is satisfied.
We recall this algorithm below
(as done also in \cite{AB1} and \cite[Section \ref{ABG2-subsec:basori}]{ABG2}). The reader interested in a geometric intuition on the full optimality sign criterion can have a look at the equivalent definitions, illustrations and interpretations given in \cite{AB1, AB2-b, AB3}.
\begin{prop}[{self-dual reformulation of \cite[Proposition 3]{GiLV05}}] \label{prop:alpha-10-inverse} Let $G$ be a graph on a linearly ordered set of edges $E=\{e_1,\dots,e_n\}_<$. For a uniactive internal spanning tree $T$ of $G$,
the two opposite orientations of $G$ whose image under $\alpha$ is $T$ are computed by the following algorithm.
\begin{algorithme}
Orient $e_1$ arbitrarily.\par For $k$ from $2$ to $n$ do\par \hskip 10 mm if $e_k\in T$ then \par \hskip 20 mm let $a=\mathrm{min} (C^*(T;e_k))$\par \hskip 20 mm orient $e_k$ in order to have $a$ and $e_k$ with opposite directions in $C^*(T;e_k)$\par \hskip 10 mm if $e_k\not\in T$ then \par \hskip 20 mm let $a=\mathrm{min} (C(T;e_k))$\par \hskip 20 mm orient $e_k$ in order to have $a$ and $e_k$ with opposite directions in $C(T;e_k)$\par \end{algorithme}
\end{prop}
Lastly, in the proof of the main result Theorem \ref{th:fob}, we will use the
following \emph{alternative characterization of the fully optimal spanning tree}, equivalent to Definition \ref{def:acyc-alpha} by \cite[Proposition 3]{GiLV05} or by \cite[Proposition 3.3]{AB1}. Let ${\ov G}=(V,E)$ be an ordered directed graph, which is bipolar with respect to $p=\mathrm{min}(E)$. Let $T=\alpha({\ov G})$.
With the convention that an edge has a positive sign in its fundamental cycle or cocycle w.r.t. a spanning tree, with $T=b_1<b_2<...<b_r$, and with $E\setminus T=\{c_1<...<c_{n-r}\}$,
we have:
\begin{itemize} \itemsep=0mm \partopsep=0mm \topsep=0mm \parsep=0mm \item $b_1=p=\mathrm{min}(E)$; \item for every $1\leq i\leq r$, all elements of $\cup_{j\leq i} C^*(T;b_j)\setminus \cup_{j\leq i-1} C^*(T;b_j)$ are positive in $C^*(T;b_i)$; \item for every $1\leq i\leq n-r$, all elements of $\cup_{j\leq i} C(T;c_j)\setminus \cup_{j\leq i-1} C(T;c_j)$ are positive in $C(T;c_i)$ except $p=\mathrm{min}(E)$. \end{itemize} Equivalently, as formulated in \cite{AB1}, in terms of compositions of signed subsets (see Section \ref{subsec:prelim-tools}), the two latter properties can be formulated the following way: $C^*(T;b_1)\circ ...\circ C^*(T;b_r)$ is positive; and $C(T;c_1)\circ ...\circ C(T;c_{n-r})$ is positive except on $p$.
\section{Recursive construction by deletion/contraction} \label{sec:induction}
This section investigates
a recursive deletion/contraction construction of the fully optimal spanning tree of an ordered bipolar digraph. See Section \ref{sec:intro} for an outline.
This construction
is developed further in \cite[Section \ref{ABG2-sec:induction}]{ABG2} by giving deletion/contraction constructions involving all orientations and spanning trees\footnote{Theorem \ref{thm:ind-10} is also stated in the companion paper \cite{ABG2}; at the moment, its proof, including Lemma \ref{lem:induc-fob-basis}, is given in both papers.}, and further still in \cite{AB4} by generalizing such constructions to oriented matroids. Let us mention that the construction, as it is formally stated below, extends directly to compute the fully optimal basis of a bounded region of an oriented matroid, as addressed in \cite{AB1}.
\begin{lemma} \label{lem:induc-fob-basis} Let ${\ov G}$ be a digraph, on a linearly ordered set of edges $E$, which is bipolar w.r.t. $p=\mathrm{min}(E)$. Let $\omega$ be the greatest element of $E$. Let $T=\alpha({\ov G})$. If $\omega\in T$ then ${\ov G}/\omega$ is bipolar w.r.t. $p$ and $T\setminus\{\omega\}=\alpha({\ov G}/\omega)$. If $\omega\not\in T$ then ${\ov G}\backslash\omega$ is bipolar w.r.t. $p$ and $T=\alpha({\ov G}\backslash\omega)$. In particular, we get that ${\ov G}/\omega$ is bipolar w.r.t. $p$ or ${\ov G}\backslash\omega$ is bipolar w.r.t. $p$. \end{lemma}
\begin{proof} First, let us recall that if a spanning tree of a directed graph satisfies the criterion of Definition \ref{def:acyc-alpha}, then this directed graph is necessarily bipolar w.r.t. its smallest edge. This is implied by \cite[Propositions 2 and 3]{GiLV05}, and also stated explicitly in \cite[Proposition 3.2]{AB1}, and this is easy to see: if the criterion is satisfied, then the spanning tree is uniactive internal (by definitions of internal/external activities) and the digraph is determined up to reversing all edges (see Proposition \ref{prop:alpha-10-inverse}), which implies that the digraph is in the inverse image of $T$ by the uniactive bijection of Theorem \ref{thm:bij-10} and that it is bipolar w.r.t. its smallest edge.
Assume that $\omega\in T$. Obviously, the fundamental cocycle of $b\in T\setminus\{\omega\}$ w.r.t. $T\setminus\{\omega\}$ in $G/\omega$ is the same as the fundamental cocycle of $b$ w.r.t. $T$ in $G$.
And the fundamental cycle of $e\not\in T$ w.r.t. $T\setminus\{\omega\}$ in $G/\omega$ is obtained by removing $\omega$ from the fundamental cycle of $e$ w.r.t. $T$ in $G$. Hence, those fundamental cycles and cocycles in $G/\omega$ satisfy the criterion of Definition \ref{def:acyc-alpha}, hence ${\ov G}/\omega$ is bipolar w.r.t. $p$ and $T\setminus\{\omega\}=\alpha({\ov G}/\omega)$.
Similarly (dually in fact), assume that $\omega\not\in T$.
The fundamental cocycle of $b\in T$ w.r.t. $T$ in $G\backslash\omega$ is obtained by removing $\omega$ from the fundamental cocycle of $b$ w.r.t. $T$ in $G$.
And the fundamental cycle of $e\not\in T\cup\{\omega\}$ w.r.t. $T$ in $G\backslash\omega$ is the same as the fundamental cycle of $e$ w.r.t. $T$ in $G$. Hence, those fundamental cycles and cocycles in $G\backslash\omega$ satisfy the criterion of Definition \ref{def:acyc-alpha}, hence ${\ov G}\backslash\omega$ is bipolar w.r.t. $p$ and $T=\alpha({\ov G}\backslash\omega)$.
Note that the fact that either ${\ov G}/\omega$ is bipolar w.r.t. $p$, or ${\ov G}\backslash\omega$ is bipolar w.r.t. $p$ could also easily be directly proved in terms of digraph properties. \end{proof}
\begin{thm} \label{thm:ind-10}
Let ${\ov G}$ be a digraph, on a linearly ordered set of edges $E$, which is bipolar w.r.t. $p=\mathrm{min}(E)$.
The fully optimal spanning tree $\alpha({\ov G})$ of ${\ov G}$ satisfies the following inductive definition,
where $\omega=\mathrm{max}(E)$.
\begin{algorithme}
If $|E|=1$ then $\alpha({\ov G})=\omega$.\par If $|E|>1$ then:\par
\hskip 10mm If ${\ov G}/\omega$ is bipolar w.r.t. $p$ but not ${\ov G}\backslash \omega$ then $\alpha({\ov G})=\alpha({\ov G}/\omega)\cup\{\omega\}$.
\hskip 10mm If ${\ov G}\backslash\omega$ is bipolar w.r.t. $p$ but not ${\ov G}/ \omega$ then $\alpha({\ov G})=\alpha({\ov G}\backslash\omega)$.
\hskip 10mm If both ${\ov G}\backslash\omega$ and ${\ov G}/\omega$ are bipolar w.r.t. $p$ then:
\hskip 20mm let $T'=\alpha({\ov G}\backslash\omega)$, $C=C_{\ov G}(T';\omega)$ and $e=\mathrm{min}(C)$\par
\hskip 20mm if $e$ and $\omega$ have opposite directions in $C$ then $\alpha({\ov G})=\alpha({\ov G}\backslash\omega)$;\par
\hskip 20mm
if $e$ and $\omega$ have the same directions in $C$ then $\alpha({\ov G})=\alpha({\ov G}/\omega)\cup\{\omega\}$.\par
{\sl or equivalently:}\par \hskip 20mm let $T''=\alpha({\ov G}/\omega)$, $D=C^*_{\ov G}(T''\cup\omega;\omega)$ and $e=\mathrm{min}(D)$\par
\hskip 20mm if $e$ and $\omega$ have opposite directions in $D$ then $\alpha({\ov G})=\alpha({\ov G}/\omega)\cup\{\omega\}$;\par
\hskip 20mm if $e$ and $\omega$ have the same directions in $D$ then $\alpha({\ov G})=\alpha({\ov G}\backslash\omega)$.
\end{algorithme} \end{thm}
\begin{proof}
By Lemma \ref{lem:induc-fob-basis}, at least one minor among $\{{\ov G}/\omega, {\ov G}\backslash\omega\}$ is bipolar w.r.t. $p$. If exactly one minor among $\{{\ov G}/\omega, {\ov G}\backslash\omega\}$ is bipolar w.r.t. $p$, then by Lemma \ref{lem:induc-fob-basis} again, the above definition is implied. Assume now that both minors are bipolar w.r.t. $p$.
Consider $T'=\alpha({\ov G}\backslash\omega)$. Fundamental cocycles of elements in $T'$ w.r.t. $T'$ in ${\ov G}$ are obtained by removing $\omega$ from those in ${\ov G}\backslash\omega$. Hence they satisfy the criterion from Definition \ref{def:acyc-alpha}. Fundamental cycles of elements in $E\setminus (T'\cup\{\omega\})$ w.r.t. $T'$ in ${\ov G}$ are the same as in ${\ov G}\backslash\omega$. Hence they satisfy the criterion from Definition \ref{def:acyc-alpha}. Let $C$ be the fundamental cycle of $\omega$ w.r.t. $T'$. If $e$ and $\omega$ have opposite directions in $C$, then $C$ satisfies the criterion from Definition \ref{def:acyc-alpha}, and
$\alpha({\ov G})=T'$. Otherwise, we have $\alpha({\ov G})\not=T'$, and, by Lemma \ref{lem:induc-fob-basis}, we must have $\alpha({\ov G})=\alpha({\ov G}/\omega)\cup\{\omega\}$.
The second condition involving $T''=\alpha({\ov G}/\omega)$ is proved in the same manner. Since it yields the same mapping $\alpha$, this second condition is actually equivalent to the first one, and so it can be used as an alternative. Note that the fact that these two conditions are equivalent is difficult and is proved here in an indirect way (actually this fact is equivalent to the key result that $\alpha$ yields a bijection), see Remark \ref{rk:ind-10-equivalence}.
\end{proof}
\begin{cor} \label{cor:ind-10}
Using notations of Theorem \ref{thm:ind-10}, if $-_\omega{\ov G}$ is bipolar w.r.t. $p$ then the above algorithm of Theorem \ref{thm:ind-10} builds at the same time $\alpha({\ov G})$ and $\alpha(-_\omega{\ov G})$, we have: $$\Bigl\{\ \alpha({\ov G}),\ \alpha(-_\omega{\ov G})\ \Bigr\}\ =\ \Bigl\{\ \alpha({\ov G}\backslash\omega),\ \alpha({\ov G}/\omega)\cup\{\omega\}\ \Bigr\}.$$
Also, we have that $-_\omega{\ov G}$ is bipolar w.r.t. $p$ if and only if ${\ov G}\backslash\omega$ and ${\ov G}/\omega$ are bipolar w.r.t. $p$.
\end{cor}
\begin{proof} Direct by Theorem \ref{thm:ind-10} and Theorem \ref{thm:bij-10} (bijection property). \end{proof}
\begin{remark}[computational complexity] \label{rk:difficult}
\rm
The algorithm of Theorem \ref{thm:ind-10} is exponential time, as it may involve an exponential number of minors. Indeed, in general, one needs to compute both $\alpha({\ov G}\backslash\omega)$ and $\alpha({\ov G}/\omega)$ in order to compute $\alpha({\ov G})$ (because one might compute $T'=\alpha({\ov G}\backslash\omega)$ and finally set
$\alpha({\ov G})=\alpha({\ov G}/\omega)\cup\{\omega\}$, or equivalently one might compute $T''=\alpha({\ov G}/\omega)$ and finally set $\alpha({\ov G})=\alpha({\ov G}\backslash\omega)$). And hence, in general, one may need to compute $\alpha({\ov G}\backslash\omega\backslash\omega')$, $\alpha({\ov G}\backslash\omega/\omega')$, $\alpha({\ov G}/\omega\backslash\omega')$ and $\alpha({\ov G}/\omega/\omega')$, with $\omega'=\mathrm{max}(E\backslash \{\omega\})$, and so on... Finally, with $|E|=n$, the number of calls to the algorithm to build $\alpha({\ov G})$ is $O(2^0+2^1+\ldots +2^{n-1})$, that is $O(2^n)$.
In contrast, the algorithm provided in Section \ref{sec:fob} involves a linear number of minors (and yields a polynomial time algorithm, see Corollary \ref{cor:complex-alpha}). However, the algorithm of Theorem \ref{thm:ind-10} is efficient in terms of computational complexity for building the images of all bipolar orientations of $G$ at once, see Remark \ref{rk:ind-10}. \end{remark}
\begin{remark}[building the whole bijection at once] \label{rk:ind-10}
\rm
By Corollary \ref{cor:ind-10},
the construction of Theorem \ref{thm:ind-10} can be used to build the whole active bijection for $G$ (i.e. the $1-1$ correspondence between all bipolar orientations of $G$ w.r.t. $p$ with fixed orientation, and all spanning trees of $G$ with internal activity $1$ and external activity $0$), from the whole active bijections for $G/\omega$ and $G\backslash\omega$.
For each pair of bipolar orientations $\{{\ov G}, -_\omega{\ov G}\}$, the algorithm determines the right ``choice'' for associating one orientation with the orientation induced in $G/\omega$ and the other with the orientation induced in $G\backslash\omega$. We mention that this ``choice'' notion is developed further in \cite[Section \ref{ABG2-sec:induction}]{ABG2} (and \cite{AB4}) as the basic component for a deletion/contraction framework. This ability to build the whole bijection at once is interesting from a structural viewpoint, but also from a computational complexity viewpoint.
Precisely, with $|E|=n$, the number of calls to the algorithm to build one image is $O(2^n)$ (see Remark \ref{rk:difficult}), but the number of calls to the algorithm to build the $O(2^n)$ images of all bipolar orientations is $O(2^n\times 2^0+ 2^{n-1}\times 2^1+\ldots+ 2^1\times 2^{n-1}+ 2^0\times 2^n)$, that is just $O(n\cdot 2^n)$.
\end{remark}
\begin{remark}[linear programming] \label{rk:ind-lp}
\rm As the full optimality notion strengthens real linear programming optimality, the deletion/contraction algorithm of Theorem \ref{thm:ind-10} corresponds to a refinement of the classical linear programming solving by constraint/variable deletion, see \cite{GiLV04, AB3} (see also Remark \ref{rk:lp}). \end{remark}
\begin{remark}[equivalence in Theorem \ref{thm:ind-10}] \label{rk:ind-10-equivalence}
\rm
The equivalence of the two formulations in the algorithm of Theorem \ref{thm:ind-10} is a deep result, difficult to prove if no property of the computed mapping is known.
Here, to prove it, we implicitly use that $\alpha$ is already well-defined by Definition \ref{def:acyc-alpha}, and bijective for bipolar orientations (Key Theorem \ref{thm:bij-10}).
But if one defines a mapping $\alpha$ from scratch as in the algorithm (with either one of the two formulations) and then investigates its properties, then it turns out that the above equivalence result is equivalent to the existence and uniqueness of the fully optimal spanning tree as defined in Definition \ref{def:acyc-alpha}. See \cite{AB4} for precisions.
This equivalence result is also related to the active duality property (see \cite[Section \ref{ABG2-subsec:fob}]{ABG2}, see also \cite[Section 5]{AB1} or \cite[Section 5]{AB2-b}).
Roughly, in terms of graphs, from \cite[Section 4]{GiLV05}, one defines a bijection between orientations obtained from bipolar orientations by reversing the source-sink edge and spanning trees with internal activity $0$ and external activity~$1$. The full optimality criterion of Definition \ref{def:acyc-alpha} can be directly adapted to these orientations with almost no change (see \cite[Section \ref{ABG2-subsec:fob}]{ABG2}). Then, thanks to the equivalence of the two dual formulations in the above algorithm, one can directly adapt the above algorithm for this second bijection, with no risk of inconsistency. In an oriented matroid setting, this equivalence result also means that the same algorithm can be equally used in the dual, with no risk of inconsistency.
These subtleties are detailed in \cite{AB4}.
\end{remark}
\section{Direct computation by optimization} \label{sec:fob}
This section gives a direct and efficient construction of the fully optimal spanning tree of an ordered bipolar digraph, in terms of an optimization based on a linear ordering of cocycles in a minor for each edge of the resulting spanning tree.
It is completely independent of Section \ref{sec:induction}. It is an adaptation for graphs of a general construction, based on elaborations on linear programming, given for oriented matroids in \cite{AB3} (see Section~\ref{sec:intro} for an outline, see also \cite{GiLV09} for a short statement in real hyperplane arrangements, note that this section could be directly generalized to regular matroids or totally unimodular matrices, and see Remark \ref{rk:lp} for more information on these linear programming aspects). In this way, the computation of the fully optimal spanning tree can be made in polynomial time (Corollary \ref{cor:complex-alpha}).
Also, we insist that we do not give here a new proof of the key Theorem \ref{thm:bij-10}:
we use this result to assume that the fully optimal spanning tree exists, and then prove that our algorithm necessarily builds it.
\begin{lemma}
\label{lem:ordering-validity} Let $G$ be a graph. Given a spanning tree $T$ of $G$ and $U\subseteq T$, there exists at most one cocycle $C$ of $G$ such that $C\cap T=U$. Given a spanning tree $T$ of $G$ and two cocycles $C$ and $D$ of $G$, there exists $f\in T$ belonging to $C\Delta D$. \end{lemma}
\begin{proof} Let us prove the first claim. Assume such a cocycle $C$ exists. Then it is defined by a partition of the set of vertices of $G$ into two sets $A$ and $B$, such that $G[A]$ and $G[B]$ are connected (since $C$ is minimal for inclusion). Each connected component of $T\setminus U$ is contained either in $G[A]$ or in $G[B]$. On the other hand, if an edge $e=(x,y)$ is in $U$ then it is in $C$ and has one extremity in $A$ and the other in $B$. Hence, as $T$ is spanning, the partition into the sets $A$ and $B$ is completely determined by $T$ and $U$, which implies the uniqueness of $C$. Now, let us prove the second claim. Assume $T\cap (C\Delta D)=\emptyset$, then we have $T\cap C\subseteq D$ and $T\cap D\subseteq C$, so $T\cap C=T\cap D= T\cap C\cap D$, which is a contradiction with the first claim for $U=T\cap C=T\cap D$. \end{proof}
\begin{definition} \label{def:optimizable-digraph} \rm
We call \emph{optimizable digraph} a (connected) digraph ${\ov G}=(V,E\cup F)$, with possibly $E\cap F\not=\emptyset$,
given with:
\noindent$\bullet$ an edge $p\in E\setminus F$, called \emph{infinity edge},
\noindent$\bullet$ a subset of edges $E$, called \emph{ground set}, such that the digraph ${\ov G}(E)$ is acyclic,
\noindent$\bullet$ a subset of edges $F$, called \emph{objective set}, linearly ordered, such that $F\cup\{p\}$ is a spanning tree~of~$G$.
Given an optimizable digraph ${\ov G}$ as defined above, we define a \emph{linear ordering for the signed cocycles of ${\ov G}$ containing $p$} as follows.
Let $C$ and $D$ be two signed cocycles (see Section \ref{subsec:prelim-tools}) of ${\ov G}$ containing $p$.
By Lemma \ref{lem:ordering-validity}, since $F\cup\{p\}$ is a spanning tree of $G$, there exists an element of $F$ that belongs to $C \Delta D$.
Let $f$ be the smallest element of $F$ such that either $f$ belongs to $C \Delta D$, or $f$ belongs to $C\cap D$ and has opposite signs in $C$ and $D$.
If $f$ is positive in $C$ or negative in $D$ then set $C>D$. If $f$ is negative in $C$ or positive in $D$ then set $D>C$.
Equivalently, we set $C>D$ if $f$ is a positive element of $C$ and not an element of $D$, or $f$ is a positive element of $C$ and a negative element of $D$, or $f$ is not an element of $C$ and a negative element of $D$, where $f$ is the smallest possible element of $F$ that allows for setting $C>D$ or $D>C$ in this way.
Equivalently, it is easy to see that this ordering is provided by the weight function $w$ on signed cocycles of $G$ defined as follows (a small computational sketch is given right after this definition). Denote $F=f_2<...<f_r$, and denote $f_i$ resp. $\overbar f_i$ the element with a positive resp. negative sign. For $2\leq i\leq r$, set $w(f_i)=2^{r-i}$ and $w(\overbar f_i)=-2^{r-i}$, and set $w(e)=0$ if $e\in E\setminus F$. Then define the weight $w(C)$ of a signed cocycle $C$ as the sum of the weights of its elements. The above linear ordering is given by:
$C>D$ if $w(C)>w(D)$.
We define \emph{the optimal cocycle} of ${\ov G}$ as the
maximal signed cocycle of ${\ov G}$ containing $p$ with positive sign, and inducing a directed cocycle of ${\ov G}(E)$. It exists since ${\ov G}(E)$ is acyclic, and it is unique since the ordering is linear.
\end{definition}
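To make the weight function above concrete, here is a minimal computational sketch, in Python, where a signed cocycle is represented as a dictionary mapping edge labels to $\pm1$ and the objective set is given as the ordered list $[f_2,\dots,f_r]$; the representation is ours. With the data of Example~\ref{example} below ($F=236$ in ${\ov G}_1$), it reproduces the cocycle ordering stated there.
\begin{verbatim}
# Sketch (ours) of the weight function defining the linear ordering of the
# signed cocycles containing p.  A signed cocycle is a dict edge -> +1/-1,
# and F is the list [f_2, ..., f_r] in increasing order.
def cocycle_weight(C, F):
    r = len(F) + 1                  # F = f_2 < ... < f_r
    return sum(C.get(f, 0) * 2 ** (r - i) for i, f in enumerate(F, start=2))

# Data of the worked example below: directed cocycles of G_1 containing p = 1,
# with F = [2, 3, 6]; their weights are 6, 5, 2, 1, 0 respectively.
cocycles = {
    "123":   {1: +1, 2: +1, 3: +1},
    "1246":  {1: +1, 2: +1, 4: +1, 6: +1},
    "1358":  {1: +1, 3: +1, 5: +1, 8: +1},
    "14568": {1: +1, 4: +1, 5: +1, 6: +1, 8: +1},
    "1457":  {1: +1, 4: +1, 5: +1, 7: +1},
}
F = [2, 3, 6]
print(sorted(cocycles, key=lambda n: cocycle_weight(cocycles[n], F),
             reverse=True))
# -> ['123', '1246', '1358', '14568', '1457']
\end{verbatim}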
\begin{thm} \label{th:fob} Let ${\ov G}=(V,E)$ be an ordered bipolar digraph with respect to $p=\mathrm{min}(E)$. The fully optimal spanning tree $\alpha({\ov G})=p<t_2<...<t_r$ is computed by the following algorithm.
\begin{algorithme} \parindent= 1cm \noindent(1) Initialize ${\ov G}$ as the optimizable digraph
given by:
\hskip 2cm \vbox{\hsize=13cm
\noindent$\bullet$ the infinity edge $p$
\noindent$\bullet$ the ground set $E$
\noindent$\bullet$ the objective set $F=f_2<...<f_r$ where $p<f_2<...<f_r$ is the smallest lexicographic spanning tree of ${\ov G}$ (and the linear ordering on $F$ is induced by the linear ordering on $E$).
}
\noindent(2) For $i$ from 2 to $r$ do:
(2.1) Let $C_{\hbox{\small opt}}$ be the optimal cocycle of ${\ov G}$.
\noindent{\small \it Scholia: $C_{\hbox{\small opt}}$ is actually the cocycle induced in the current digraph ${\ov G}$ by the fundamental cocycle of the element $t_{i-1}$ (computed at the previous step, $t_1=p$) of the fully optimal spanning tree of the initial digraph.}
(2.2) Let $$t_i=\mathrm{min} (E\setminus C_{\hbox{\small opt}})$$
\noindent{\small \it Scholia: the $i$-th edge $t_i$ of $\alpha({\ov G})$ is actually the smallest edge not contained in fundamental cocycles of smaller edges of $\alpha({\ov G})$ because the fully optimal spanning tree $\alpha({\ov G})$ is internal uniactive.}
(2.3) Let $$p'=t_i$$
(2.4) Let $$E'=E\setminus C_{\hbox{\small opt}}$$
(2.5) \vtop{\hsize=13cm \noindent If $p'\in F$ then let $F'=F\setminus\{p'\}$, and if $p'\not\in F$ then let: }
$$ F'= F\setminus \mathrm{max} \bigl(\ F\cap C_{G} (F \cup \{p\};p')\ \bigr).$$
\hskip 1cm \vtop{\hsize=14cm \noindent
Equivalently, $F'$ is obtained by removing from $F$ the greatest possible element such that the following property holds:
\it $p'\not\in F'$ and $F'\cup\{p'\}$ is a spanning tree of the minor ${\ov G}'$ defined below. }
(2.6) Set
$${\ov G}'= {\ov G}\ (\ E'\ \cup\ F'\ \cup\ \{p\}\ )\ /\ \{p\}$$
\hskip 1cm as the optimizable digraph
given by:
\hskip 2cm \vbox{\hsize=12cm
\noindent$\bullet$ the infinity edge $p'$
\noindent$\bullet$ the ground set $E'$
\noindent$\bullet$ the objective set $F'$
}
(2.7) Update ${\ov G}:={\ov G}'; \ p:=p'; \ E:=E'; \ F:=F'.$
\noindent(3) Output $$\alpha({\ov G})=p<t_2<...<t_r.$$ \end{algorithme} \end{thm}
\begin{figure}\label{fig:fob-algo}
\end{figure}
\begin{figure}\label{fig:fob-output}
\end{figure}
\begin{example} \label{example} \rm Theorem \ref{th:fob} might seem rather technical in a pure graph setting, so let us first illustrate it on an example before proving it (see Remark \ref{rk:lp} or \cite{GiLV09, AB3} for geometrical interpretations). Let us apply the algorithm of Theorem \ref{th:fob} to the same example as in \cite{GiLV05}, where it illustrated the inverse algorithm. The steps of the algorithm are depicted on Figure \ref{fig:fob-algo}. The output is depicted on Figure~\ref{fig:fob-output}. One can keep in mind that the successive optimal cocycles built in the algorithm correspond to the fundamental cocycles with respect to the successive edges of the fully optimal spanning tree (cf. scholia above, see details in property (P2$_i$) in the proof of Theorem \ref{th:fob} below).
{ \parindent=1cm
\noindent {\it --- Computation of $t_2$.}
Initially ${\ov G}={\ov G}_1$ is a digraph with set of edges $E= \{1<2<3<4<5<6<7<8\}$.
The minimal spanning tree is $\{1<2<3<6\}$.
Hence $p = 1$ and
$F = 236$.
The linearly ordered directed cocycles of ${\ov G}_1$ containing $p=1$ are:
$123\ > \ 1246\ > \ 1358\ > \ 14568\ > \ 1457$.
The maximal is $C_{\hbox{\small opt}}= 123$ (which is equal, at this first step, to the smallest for the lexicographic ordering, see Observation \ref{obs:first-cocycle} below).
We get $t_2=4$.
\noindent {\it --- Computation of $t_3$.}
We now consider the optimizable digraph ${\ov G}_2={\ov G}_1 / \{1\} \backslash \{3\}$
given with
$p = 4$,
$E = 45678$,
and $F = 26$
(the edge $3$ is deleted as the greatest belonging to the cycle $134$ and to the previous $F=236$).
The linearly ordered signed cocycles of ${\ov G}_2$ positive on $E$ and containing $p=4$ are:
$46\ >\ \overline{2}457 $
(where the bar upon elements represents negative elements).
\noindent The maximal is $C_{\hbox{\small opt}}=46$.
We get $t_3=5$.
\noindent {\it --- Computation of $t_4$.}
We now consider the optimizable digraph ${\ov G}_3 = {\ov G}_2 / \{4\} \backslash \{ 2\}$
given with
$p=5$,
$E=578$, and $F=6$
(the edge $2$ is deleted as the greatest belonging to the cycle $25$ and to the previous $F=26$).
The linearly ordered signed cocycles of ${\ov G}_3$ positive on $E$ and containing $p=5$ are:
$58\ >\ 5\overline{6}7$.
The maximal is $C_{\hbox{\small opt}}=58$.
We get $t_4=7$.
\noindent {\it --- Output.} We get finally $\alpha({\ov G})=1457$ (and one can check that the fundamental cocycles $C^*(1457;1)=123$, $C^*(1457;4)=\overline{3}46$, and $C^*(1457;5)=\overline{2}58$ induce the successive optimal cocycles $123$, $46$ and $58$ in the successive considered minors). }
\end{example}
\begin{paragraph}{\textit{\textbf{Notations for what follows}}} The proof of Theorem \ref{th:fob} is given below, after two lemmas. In all this framework, we will use the following notations.
We denote ${\ov G}$, ${\ov G}'$, etc., the variables as they are used during the algorithm, meaning they are considered at any given step of the algorithm with their current value. We denote ${\ov G}_1$ the initial optimizable digraph ${\ov G}$, and ${\ov G}_i$ the variable optimizable digraph ${\ov G}$ updated at step (2.7) w.r.t. variable $i$, with parameters $p_i=t_i$ as $p$, $E_i$ as $E$ and $F_i$ as $F$, for all $1\leq i\leq r$. \end{paragraph}
\begin{lemma} \label{lem:algo-fob-well-defined} The algorithm of Theorem \ref{th:fob} is well-defined. \end{lemma}
\begin{proof}
Initially, the optimizable digraph ${\ov G}={\ov G}_1$ is obviously well defined. One has to check that the optimizable digraph ${\ov G}'$ defined at each call to step (2.6) is well defined.
First, by induction hypothesis, ${\ov G}$ is connected, and $C_{\hbox{\small opt}}$ is a cocycle of ${\ov G}$, that is an inclusion-minimal cut. We have $p\in C_{\hbox{\small opt}}$ by definition of $C_{\hbox{\small opt}}$, and we have $E'=E\setminus C_{\hbox{\small opt}}$ by step (2.4), hence $p$ is the unique edge joining the two connected components of ${\ov G}(E'\cup \{p\})$. Hence ${\ov G}(E'\cup \{p\})/\{p\}$ is connected, and so is ${\ov G}'$.
By definitions at steps (2.2)(2.3)(2.4), we have $p'=\mathrm{min}(E')$, and by definition of $F'$ at step (2.5), we have $p'\not\in F$, hence $p'\in E'\setminus F'$, as required for an optimizable digraph.
Since $F\cup\{p\}$ is a spanning tree of ${\ov G}$, then $F$ is a spanning tree of ${\ov G}'$, and, by definition of $F'$ at step (2.5), $F'\cup\{p'\}$ is a spanning tree of ${\ov G}'$, as required for an optimizable digraph.
By assumption, at each call to step (2.6), the digraph ${\ov G}(E)$ is acyclic. Since $C_{\hbox{\small opt}}$ is a cocycle of ${\ov G}$ with $p\in C_{\hbox{\small opt}}$ as above, then ${\ov G}(E'\cup \{p\})$ is also acyclic, and so is ${\ov G}'(E')$, as required for an optimizable digraph. \end{proof}
\begin{lemma} \label{lem:algo-fob-invariant}
At any step of the algorithm of Theorem \ref{th:fob},
the following invariant is maintained: the smallest element of a cocycle of $G$ belongs to $F\cup\{p\}$, and this element is equal to the smallest element of the cocycle of $G_1$ inducing this cocycle (in the minor $G$ of $G_1$, see Section \ref{subsec:prelim-tools}). \end{lemma}
\begin{proof} The property is true at the initial step since $F\cup\{p\}=F_1\cup\{p_1\}$ is defined as the smallest lexicographic spanning tree of $G=G_1$. Assume it is true for $G$; we want it to be true also for $G'$ as defined at step (2.6). Let $D'$ be a cocycle of $G'$. By construction of $G'=G\backslash A/\{p\}$ for some set $A$ such that $G'$ is connected, there exists a cocycle $D$ of $G$ inducing $D'$, such that $p\not\in D$ and $D\setminus A=D'$ (see Section \ref{subsec:prelim-tools}).
Since $F\cap A=F\setminus (E'\cup F' \cup \{p\})$ by definition of $G'$, and $F\setminus F'$ contains exactly one element $f$ by definition of $F'$ at step (2.5), then $F\cap A$ contains at most this element $f$.
By induction hypothesis, we have $\mathrm{min}(D)\in F$. By definition of $D$, we have $D\setminus A=D'$.
Assume for a contradiction that $\mathrm{min}(D')\not= \mathrm{min}(D)$.
Then $\mathrm{min}(D)\in A\cap F$, implying $F\cap A=\{f\}$ and $\mathrm{min}(D)=f$. There are two cases for defining $f$ at step (2.5). In the first case, we have $f=p'=\mathrm{min}(E')\in F$, which implies $f\in E'$ and which contradicts $\{f\}=F\cap A=F\setminus (E'\cup F' \cup \{p\})$. In the second case, $f$ is defined as the greatest element of the unique cycle $C$ of $G$ contained in $F\cup\{p,p'\}$. In this case, assume an element $f'$ belongs to $D\cap C$. We have $f'\leq f$ since $f'\in C$ and the greatest element of $C$ is $f$. And we have $f'\geq f$ since $f'\in D$ and $\mathrm{min}(D)=f$. Hence $f'=f$, and we have proved $D\cap C=\{f\}$,
which contradicts the orthogonality of the cycle $C$ and the cocycle $D$ (see Section \ref{subsec:prelim-tools}).
So we have $\mathrm{min}(D')= \mathrm{min}(D)$. Since the cocycle $D_1$ of $G_1$ inducing $D$ in $G$ also induces $D'$ in $G'$, and since $\mathrm{min}(D)=\mathrm{min}(D_1)$ by induction hypothesis, we get $\mathrm{min}(D')=\mathrm{min}(D_1)$.
Finally, if $\mathrm{min}(D')\not\in F'$, since $\mathrm{min}(D')=\mathrm{min}(D)\in F$, then $\mathrm{min}(D')=f$. As above, there are two cases for defining $f$ at step (2.5). In the first case, we have $f=p'=\mathrm{min}(E')$, hence $\mathrm{min}(D')=f\in F'\cup\{p'\}$, which is the property that we have to prove. In the second case, we have $\mathrm{min}(D)=f$ and the same argument as above leads again to $C\cap D=\{f\}$ and to the same contradiction.
The invariant is now proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{th:fob}] The present proof is a condensed but complete version for graphs
of the general geometrical proof that will be given in \cite{AB3}, taking advantage of the linear ordering of cocycles defined above (which is possible in graphs but not in general, see Remark \ref{rk:lp}).
We will extensively use the notion of cocycle induced in a minor of a graph by a cocycle of this graph, see Section \ref{subsec:prelim-tools}.
Since ${\ov G}_1$ is bipolar, its fully optimal spanning tree $\alpha({\ov G}_1)$ exists
and satisfies the properties recalled in Section \ref{subsec:prelim-fob}. By Lemma \ref{lem:algo-fob-well-defined}, the algorithm given in the theorem statement is well defined.
Now, we have to prove that $\alpha({\ov G}_1)$ is necessarily equal to the output of this algorithm.
The proof consists in proving by induction that, for every $2\leq i\leq r$, the two following properties (P1$_i$) and (P2$_i$) are true. The property (P1$_i$) for $2\leq i\leq r$ means that the algorithm actually returns $\alpha({\ov G}_1)$. The property (P2$_i$) for $2\leq i\leq r$ means that the optimal cocycles computed in the successive minors are actually induced by the fundamental cocycles of the fully optimal spanning tree $\alpha({\ov G}_1)$, as noted in the first scholia in the algorithm statement. Then, the proof is not long but rather technical. Let us denote $T=\alpha({\ov G}_1)=\{b_1<\ldots<b_r\}$.
\begin{itemize} \partopsep=0mm \topsep=0mm \parsep=0mm \itemsep=0mm \item \noindent (P1$_i$) {\it We have $b_i=t_i$, where $t_i$ is defined at step (2.2) of the algorithm and $b_i$ is the $i$-th element of $T=\alpha({\ov G}_1)$.}
\item \noindent (P2$_i$) {\it The optimal cocycle $C_{\hbox{\small opt}}$ of ${\ov G}_{i-1}$, defined at step (2.1) of the algorithm equals the cocycle denoted $C_{i-1}$ of ${\ov G}_{i-1}$ induced by the fundamental cocycle $C^*(T;b_{i-1})$ of $p_{i-1}=t_{i-1}=b_{i-1}$ w.r.t. $T$ in~${\ov G}_1$, that is: $C_{i-1}= C^*(T;b_{i-1}) \cap (E_{i-1}\cup F_{i-1})$, where $(E_{i-1}\cup F_{i-1})$ is the edge set of ${\ov G}_{i-1}$. } \end{itemize}
First, observe that $C_{i-1}$ is a well defined induced cocycle in property (P2$_i$) as soon as (P1$_j$) is true for all $j<i-1$ (implying $t_j=b_j$), since $b_j\not\in C^*(T;b_{i-1})$ for $j<i-1$ and ${\ov G}_{i-1}={\ov G}_1/\{p_1,t_2,...,t_{i-2}\}\setminus A$ for some $A$.
Second, let $1\leq k\leq r$. Assume that, the property (P2$_i$) is true for all $2\leq i\leq k$ and the property (P1$_i$) is true for all $2\leq i\leq k-1$. Then we directly have that the property (P1$_i$) is also true for $i=k$. Indeed, as shown in \cite[Proposition 2]{GiLV05} (that can be proved easily), the fact that the spanning tree $T=b_1<b_2<...<b_r$
has external activity $0$
implies that, for all $1\leq i\leq r$, we have $b_i=\mathrm{min} \bigl(E\setminus \cup_{j\leq i-1} C^*(T;b_j)\bigr)$. So we have $b_{k}=\mathrm{min}(E_{k-1} \setminus C_{k-1})=\dots=\mathrm{min}(E_1\setminus \cup_{j\leq k-1} C_j)=\mathrm{min}(E_1\setminus \cup_{j\leq k-1} C^*(T;b_j))$, and so we have $t_{k}=b_{k}$.
Now, it remains to prove that, under the above assumption, the property (P2$_i$) is true for $i=k+1$.
Consider $C_{\hbox{\small opt}}$, denoting the optimal cocycle of ${\ov G}_k$, and $C_k$, denoting the cocycle of ${\ov G}_{k}$ induced by the fundamental cocycle of $p_k=t_{k}=b_{k}$ w.r.t. $T$ in~${\ov G}_1$.
Assume for a contradiction that $C_{\hbox{\small opt}}\not=C_k$.
Since properties (P1$_i$) and (P2$_i$) are true for all $2\leq i\leq k$ by assumption, and reformulating definition of ${\ov G}'$ at step (2.6), we have:
\begin{eqnarray*} {\ov G}_k&=&{\ov G}_{k-1}/\{b_{k-1}\}\setminus \Bigl(C_{k-1}\setminus (F_k \cup \{b_{k-1}\})\Bigr)\\ &=&{\ov G}_{k-1}/\{b_{k-1}\}\setminus \Bigl(C^*(T;b_{k-1})\setminus (F_k \cup \{b_{k-1}\})\Bigr) \end{eqnarray*}
and hence, inductively, we have: $${\ov G}_k={\ov G}_1/\{b_1,...,b_{k-1}\}\backslash A$$ where $A$ is the union of all fundamental cocycles of $b_i$ w.r.t. $T$ for $1\leq i\leq k-1$ minus $F_k \cup \{b_1,...,b_{k-1}\}$. That is: $$A=\Bigl(\ \bigcup_{1\leq i\leq k-1}C^*(T;b_i)\ \Bigr) \ \ \setminus\ \ \Bigl(\ F_k \cup \{b_1,...,b_{k-1}\}\ \Bigr).$$ In particular, we have $C_k=C^*(T;b_k)\setminus A$. And we also have $A\cap T=\emptyset$.
As recalled in Section \ref{subsec:prelim-fob},
the definition of $T$ implies that $C_k$ is positive on $E_k$, that is positive except maybe on elements of $F_k\setminus E_k$, just the same as $C_{\hbox{\small opt}}$.
By assumption and property (P1), we have $b_k=t_k=\mathrm{min}(E_k)$. Then, by definition of $C_k= C^*(T;b_{k}) \cap (E_{k}\cup F_{k})$, we have $b_k\in C_k$. Hence, the cocycle $C_k$ has been taken into account in the linear ordering of cocycles of the optimizable digraph ${\ov G}_k$ defining $C_{\hbox{\small opt}}$. So $C_{\hbox{\small opt}}>C_k$ in this linear ordering by definition of $C_{\hbox{\small opt}}$.
By definition of this linear ordering, let $f$ be the smallest element of $F_k$ with the property of being positive in $C_{\hbox{\small opt}}$ and not belonging to $C_k$, or positive in $C_{\hbox{\small opt}}$ and negative in $C_k$, or not belonging to $C_{\hbox{\small opt}}$ and negative in $C_k$. The edge $f$ does not have opposite signs in $C_{\hbox{\small opt}}$ and $-C_k$. So let $D'$ be a cocycle of ${\ov G}_k$ obtained by elimination between $C_{\hbox{\small opt}}$ and $-C_k$ preserving $f$ (see Section \ref{subsec:prelim-tools}). Necessarily, $f$ is positive in $D'$ by definition of $f$. By definitions of $C_{\hbox{\small opt}}$ and $C_k$, the element $b_k$ is positive in $C_{\hbox{\small opt}}$ and in $C_k$. Moreover, every edge in $F$ smaller than $f$ belonging to $C_{\hbox{\small opt}}$ resp. $C_k$ also belongs to $C_k$ resp. $C_{\hbox{\small opt}}$ with the same sign, by definition of $f$. Hence, by properties of elimination, the cocycle $D'$ does not contain $b_k$ nor any element of $F$ smaller than $f$ belonging to $C_{\hbox{\small opt}}$ or $C_k$. Since the smallest edge in $D'$ belongs to $F\cup\{b_k\}$, as shown by the invariant of Lemma \ref{lem:algo-fob-invariant}, and since we have $b_k\not\in D'$, then we have $\mathrm{min}(D')=f$.
The negative elements of $D'$ are either elements of $F_k\setminus E_k$, or elements of $C_k\setminus \{b_k\}$. In the first case, since $E_k=E_1\setminus A$, the negative elements belong to $F_k\cap A\subseteq A\subseteq E_1\setminus T$. In the second case, the negative elements also belong to $E_1\setminus T$
by definition of a fundamental cocycle.
Finally, let $D$ be the cocycle of ${\ov G}_1$ inducing $D'$ in ${\ov G}_k$, such that $D\cap \{b_1,...,b_k\}=\emptyset$ and $D\setminus A=D'$. The negative elements of $D$ belong to $A$ or are negative in $D'$, then, in every case, they belong to $E_1\setminus T$. Moreover, as shown by the invariant of Lemma \ref{lem:algo-fob-invariant}, we have $\mathrm{min}(D)=\mathrm{min}(D')=f$.
So we have built a cocycle $D$ such that:
(i) $D\cap \{b_1,...,b_k\}=\emptyset$
(ii) $\mathrm{min}(D)=f$
(iii) $f$ is positive in $D$
(iv) the negative elements of $D$ are in $E_1\setminus T$
In a first case, we assume that $f\not\in T$. Then $f=c_j$ for some $c_j\in E_1\setminus T$. Let $C=C(T;c_1)\circ ...\circ C(T;c_j)$. As recalled above, this composition of cycles has only positive elements except the first one $p_1=b_1$. The edge $f$ is positive in $C$ and $D$, hence, by orthogonality (see Section \ref{subsec:prelim-tools}), there exists an edge $e\in C\cap D$ with opposite signs in $C$ and $D$. We have $b_1\not\in D$, hence $e\not=b_1$, and hence $e$ is positive in $C$ and negative in $D$. Hence $e\in E_1\setminus T$. Since $e\in C$, we must have $e=c_i$ for some $i\leq j$, that is $e \leq f$.
But $f=\mathrm{min}(D)$ implies $e=f$, which is a contradiction.
In a second case, we assume that $f\in T$. Let $a=\mathrm{min} \bigl(C^*(T;f)\bigr)$. Since $f\in F$, we have $f\not=\mathrm{min}(E_1)$, and so $f\not= a$ by definition of $T$, so we have $a\not\in T$. Then $a=c_j$ for some $c_j\in E_1\setminus T$. Let $C=C(T;c_1)\circ ...\circ C(T;c_j)$, which has only positive elements except the first one $p_1=b_1$. Since $a\in C^*(T;f)$, we have $f\in C(T;c_j)$. As above, by orthogonality, the edge $f$ positive in $C$ and $D$, together with $p_1\not\in D$, implies an edge $e\in C\cap D$ positive in $C$ and negative in $D$. So $e\in E_1\setminus T$, and so $e=c_i$ for some $i\leq j$, so $e\leq a$, which implies $e\leq f$ by definition of $a$, leading to the same contradiction as above with $f=\mathrm{min}(D)$. \end{proof}
\begin{remark}[linear programming] \label{rk:lp} \rm
The algorithm of Theorem \ref{th:fob} actually consists in a translation and an adaptation in the case of graphs of a geometrical algorithm providing in general a strengthening of pseudo/real linear programming (in oriented matroids / real hyperplane arrangements). In this setting, we optimize a sequence of nested faces (the successive optimal cocycles in the above algorithm, a process that we call flag programming), each with respect to a sequence of objective functions (the linearly ordered objective set in the above algorithm, a process that we call multiobjective programming), yielding a unique fully optimal basis for any bounded region. This refines standard linear programming where just one vertex is optimized with respect to just one objective function, but this can be computed inductively using standard pseudo/real linear programming. See \cite{AB3} for details (see also \cite{GiLV09} for a short presentation and formulation of the general algorithm in the real case; see also \cite{AB1} for the primary relations between full optimality and linear programming; see also \cite{GiLV04} in the simple case of the general position/uniform oriented matroids where the first fundamental cocircuit determines the basis; see also Section \ref{sec:intro} for complementary information and references notably on duality properties; see also Remark \ref{rk:ind-lp} in relation with deletion/contraction).
Let us relate that to the present paper. The flag programming can be addressed inductively by means of a sequence of multiobjective programming in minors. This induction is rather similar in the graph case, yielding the successive minors addressed in the above algorithm of Theorem \ref{th:fob}. The multiobjective programming can be addressed inductively by means of a sequence of standard linear programming in lower dimensional regions. In the graph case, this induction can be avoided and transformed into the linear ordering of cocycles by means of a suitable weight function (Definition \ref{def:optimizable-digraph}), yielding a unique optimal cocycle. This simplification is possible because $U_{2,4}$ is an excluded minor (as for binary matroids), which yields very special properties for the skeleton of regions. Hence, the construction of this section could be readily extended to binary oriented matroids (that is regular oriented matroids, or totally unimodular matrices). See the proof of Proposition \ref{prop:complex-optim} for technical explanations.
From the computational complexity viewpoint, we get from this approach that the optimal cocycle, and hence the fully optimal spanning tree, can be computed in polynomial time, a result which we state in Proposition \ref{prop:complex-optim} and Corollary \ref{cor:complex-alpha} below. However, this complexity bound is achieved here by using numerical methods for standard real linear programming. An interesting question is to achieve this bound by staying at the combinatorial level of the graph, see Remark~\ref{rk:graph-complexity}. \end{remark}
\begin{prop} \label{prop:complex-optim} The optimal cocycle of an optimizable digraph can be computed in polynomial time. \end{prop}
\begin{proof} This proof is based on geometry and linear programming. It can be seen as a special case of a general construction in terms of multiobjective programming detailed in \cite{AB3}, see Remark \ref{rk:lp}. We assume that the reader is familiar with the classical representation of a directed graph as a signed real hyperplane arrangement (which is usual in terms of oriented matroids,
see for instance \cite[Chapter 1]{OM99}). We thus freely switch between graphical and geometrical terminologies.
Let us consider an optimizable digraph ${\ov G}=(V,E\cup F)$ as in Definition \ref{def:optimizable-digraph}. Each edge of ${\ov G}$ is identified to a hyperplane in a real vectorial space providing a positive halfspace delimited by this hyperplane. Since ${\ov G}(E)$ is acyclic, the intersection of positive halfspaces of hyperplanes in $E$ is a region $R$ of the space.
Concisely, the region defined by $E$ is where the optimization takes place, the hyperplanes in $F$ are the kernels of linear forms whose values are to be optimized (intuitively increasing from the negative halfspace to the positive halfspace), and the hyperplane $p$ is considered as a hyperplane at infinity.
Precisely, let us now proceed to the affine hyperplane arrangement induced by intersections of hyperplanes with a hyperplane parallel to $p$ on the positive side of $p$. In this affine representation, the vertices (0-dimensional faces) correspond to cocycles of $G$ containing $p$, and the region $R$ induces a region which we denote $R$ again, and the vertices of the skeleton of $R$ correspond to the directed cocycles of ${\ov G}(E)$ containing $p$.
Let us consider two adjacent vertices $v_C$ and $v_D$ in the skeleton of $R$.
Let $L$ be the line formed by $v_C$ and $v_D$.
Let $f\in F$ be an affine hyperplane which is not parallel to the line $L$, and let $v_f$ be the intersection of $L$ and $f$.
Since the initial hyperplane arrangement has been built from a graph, the uniform matroid $U_{2,4}$ is an excluded minor of the underlying matroid,
which implies that a line in the considered affine hyperplane arrangement has at most two intersections with hyperplanes. Hence, among the three vertices $v_C$, $v_D$ and $v_f$ of $L$, at least two of them are equal, hence $v_f=v_C$ or $v_f=v_D$, which implies $f\in C\triangle D$. Note that this is where we use the fact that the oriented matroid is graphical, and this is why the construction of this paper directly generalizes to regular matroids.
Now let us apply the definition of the ordering of cocycles of the optimizable digraph ${\ov G}$.
Assume that $f$ is the smallest element of $F$ belonging to $C\triangle D$. We have $C>D$ if $f$ is positive in $C$ (hence with $f\not\in D$) or negative in $D$ (hence with $f\not\in C$). In any case, the ordering (either $C>D$ or $C<D$) can be seen as indicating if $v_C$ has either a bigger or a smaller value than $v_D$ with respect to a linear form defined by $f$. So, in the same manner, as shown by the weight function defining the ordering, the optimal cocycle $C_{\hbox{\small opt}}$ of ${\ov G}$ can be obtained the following way, denoting $F=f_2<...<f_r$: first find the optimal vertices in the region $R$ with respect to a linear form defined by $f_2$ (they form a face parallel to $f_2$), second, among these vertices, find the optimal vertices with respect to a linear form defined by $f_3$ (they form a face parallel to $f_2\cap f_3$), and so on, until finding the uniquely determined cocycle $C_{\hbox{\small opt}}$. Note that this multiobjective programming is part of the general construction addressed in \cite{AB3}, which we could translate here into a linear ordering of cocycles thanks to the special geometry of a graphical arrangement.
Finding an optimal vertex with respect to a linear form can be done in polynomial time using numerical methods of real linear programming. Hence finding the $r-1$ successive sets of optimal vertices, until the unique final one corresponding to $C_{\hbox{\small opt}}$, can also be done in polynomial time. \end{proof}
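As an illustration of this sequential optimization, the following is a minimal sketch, in Python, of lexicographic linear programming over a region $\{x : Ax\le b\}$ with a list of objective forms, each maximized over the optimal face of the previous ones, using an off-the-shelf LP solver; the numerical data at the end are purely hypothetical placeholders, not taken from the paper.
\begin{verbatim}
# Sketch (ours): lexicographic ("multiobjective") maximization over a region,
# each objective being maximized over the optimal face of the previous ones.
import numpy as np
from scipy.optimize import linprog

def lexicographic_lp(A, b, objs):
    """Maximize the forms in objs lexicographically over {x : A x <= b}."""
    A_eq, b_eq = [], []
    x = None
    for c in objs:
        res = linprog(-np.asarray(c), A_ub=A, b_ub=b,
                      A_eq=np.array(A_eq) if A_eq else None,
                      b_eq=np.array(b_eq) if b_eq else None,
                      bounds=[(None, None)] * A.shape[1])
        x = res.x
        # freeze the value just attained before optimizing the next form
        A_eq.append(list(c))
        b_eq.append(float(np.dot(c, x)))
    return x

# Hypothetical data: a bounded planar region and two objective forms.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, 1.0])
print(lexicographic_lp(A, b, objs=[[1.0, 0.0], [0.0, 1.0]]))   # ~ [1. 1.]
\end{verbatim}
In a robust implementation one would relax the successive equality constraints by a numerical tolerance; the sketch keeps them exact for brevity.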
\begin{cor} \label{cor:complex-alpha} The fully optimal spanning tree of an ordered directed graph which is bipolar w.r.t. its smallest edge can be computed in polynomial time. \end{cor}
\begin{proof} The algorithm of Theorem \ref{th:fob} consists in finding the optimal cocycle of $r-1$ successive optimizable digraphs,
built as simple minors of the initial digraph. So, we apply Proposition~\ref{prop:complex-optim}.
\end{proof}
\begin{remark}[computational complexity using a pure graph setting] \label{rk:graph-complexity} \rm An interesting question is to prove Proposition \ref{prop:complex-optim} without using a numerical linear programming method, that is to build the optimal cocycle of an optimizable digraph in polynomial time while staying at the graph level.
Let us give an answer which is available for the first computed optimal cocycle in Theorem \ref{th:fob}, which is the fundamental cocycle of $p=\mathrm{min}(E)$ w.r.t. the fully optimal spanning tree of ${\ov G}$. For the initial optimizable digraph ${\ov G}$, the ground set is the whole edge set $E$, hence the optimal cocycle $C_{\hbox{\small opt}}$ is a directed cocycle of ${\ov G}(E)={\ov G}$. In this case, no negative element has to be taken into account when defining $C_{\hbox{\small opt}}$ from the weight function of Definition \ref{def:optimizable-digraph}. Actually, one can thus also deduce a stronger property for this first $C_{\hbox{\small opt}}$ (using the fact that the objective set is built from the smallest lexicographic spanning tree), see Observation \ref{obs:first-cocycle}.
What is important is that, in this case, $C_{\hbox{\small opt}}$ turns out to be the directed cocycle of maximal weight in a bipolar acyclic digraph ${\ov G}$ for a certain weight function on (undirected) edges.
In such a situation, in order to build such a $C_{\hbox{\small opt}}$ cocycle, one can use the celebrated \emph{Max-flow-Min-cut Theorem} of digraphs. We refer the reader to \cite{BaGu02} for details
(see also for instance \cite{Sc03} on the problem of finding a minimum directed cut in other terms). Roughly, start with the acyclic digraph with weights on edges, and add all opposite edges with infinite weights. By this theorem, computing a minimal cut is equivalent to computing a maximum flow, hence it can be done in polynomial time (and it is even simpler in our case, where there is only one adjacent source and sink). Since the resulting minimum cut has a finite weight, it necessarily corresponds to a directed cocycle of the initial digraph (obtained by removing the edges with an infinite weight), that is, to $C_{\hbox{\small opt}}$.
However, this construction cannot be directly applied to compute the optimal cocycle of a general optimizable digraph (nor to compute the next fundamental cocycles of the fully optimal spanning tree), since
the optimal cocycle is not in general a directed cocycle of the optimizable digraph (only of its restriction to the ground set $E$) and weights of edges in $F$ may have to be counted negatively depending on their direction. We leave this open question for further research. \end{remark}
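To illustrate the computation sketched in the remark above, here is a minimal sketch (under our own assumptions, not part of the paper's construction) of the minimum weight directed cut of an acyclic digraph with a designated source and sink, using the networkx library, after adding all opposite edges with a weight playing the role of infinity.
\begin{verbatim}
# Minimal sketch: minimum weight directed cut of an acyclic digraph,
# obtained via max flow after adding opposite edges with a huge weight
# so that only edges of the original digraph can be cut.
import networkx as nx

def min_weight_dicut(edges, s, t):
    # edges: iterable of (u, v, weight); s, t: source and sink
    INF = 1 + sum(w for _, _, w in edges)    # plays the role of infinity
    G = nx.DiGraph()
    for u, v, w in edges:
        G.add_edge(u, v, capacity=w)
        G.add_edge(v, u, capacity=INF)       # opposite edge, never cut
    cut_value, (S, T) = nx.minimum_cut(G, s, t)
    # original edges from the source side to the sink side form the cut
    cocycle = [(u, v) for u, v, w in edges if u in S and v in T]
    return cut_value, cocycle
\end{verbatim}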
\begin{observation} \label{obs:first-cocycle} Let ${\ov G}=(V,E)$ be an ordered bipolar digraph with respect to $p=\mathrm{min}(E)$. The fundamental cocycle of $p$ w.r.t. the fully optimal spanning tree of ${\ov G}$, that is $C^*(\alpha({\ov G});p)$, is actually the smallest lexicographic directed cocycle of ${\ov G}$. We leave the details to the reader (see Remark \ref{rk:graph-complexity}).
\end{observation}
\noindent{\bf Acknowledgments.} \noindent Emeric Gioan wishes to thank J{\o}rgen Bang-Jensen and St\'ephane Bessy for communicating reference \cite{BaGu02} and how to use the \emph{Max-flow-Min-cut Theorem} in Remark~\ref{rk:graph-complexity}.
\input{ABGraphs2LP.bbl}
\end{document}
|
arXiv
|
Notice that \[35\cdot40=1400.\]Find some integer $n$ with $0\leq n<1399$ such that $n$ is the multiplicative inverse to 160 modulo 1399.
Taking the given equation modulo 1399 gives \[35\cdot40\equiv1\pmod{1399},\]so we know that 35 is the multiplicative inverse to 40. We want to use this to find the multiplicative inverse to $4\cdot40=160$, so we want to try to "divide" 35 by 4.
The difficulty in dividing by 4 is that 35 is odd. We do know, though, that \[35\equiv35+1399\equiv1434\pmod{1399}\]and this number is even! Let's go even further to find a multiple of 4: \[35\equiv35+3\cdot1399\equiv4232\pmod{1399}.\]Factoring 4 we get \[35\equiv4\cdot1058\pmod{1399}.\]Finally we multiply by 40: \[1\equiv 40\cdot35\equiv40\cdot4\cdot1058\equiv160\cdot1058\pmod{1399}.\]This argument is inelegant. Let's write it in a clearer order: \begin{align*}
1058\cdot160&\equiv1058\cdot(4\cdot40)\\
&\equiv(1058\cdot4)\cdot40\\
&\equiv35\cdot40\\
&\equiv1\pmod{1399}.
\end{align*}The multiplicative inverse to 160 modulo 1399 is $\boxed{1058}$.
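One can also confirm the answer numerically; here is a short check (not part of the solution) in Python, assuming Python 3.8+ for the three-argument pow:

assert 35 * 40 % 1399 == 1       # the given relation
assert 160 * 1058 % 1399 == 1    # 1058 inverts 160 modulo 1399
print(pow(160, -1, 1399))        # prints 1058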
|
Math Dataset
|
\begin{document}
\title{Families of point sets with identical 1D persistence}
\author[1]{Philip Smith} \author[1]{Vitaliy Kurlin (corresponding author)}
\affil[1]{Computer Science, University of Liverpool, Liverpool, L69 3BX, UK, [email protected], [email protected]}
\maketitle
\begin{abstract} Persistent homology is a popular and useful tool for analysing point sets, revealing features of a point set that can be used to highlight key information, distinguish point sets and as an input into machine learning pipelines. The famous stability theorem of persistent homology provides an upper bound for the change in persistence under perturbations, but it does not provide a lower bound. This paper clarifies the possible limitations persistent homology may have in distinguishing point sets, which is clearly evident for point sets that have trivial persistence. We describe large families of point sets that have identical or even trivial one-dimensional persistence. The results motivate stronger invariants to distinguish point sets up to isometry. \end{abstract}
\section{Introduction: persistence is an isometry invariant} \label{sec:intro}
Topological Data Analysis (TDA) was pioneered by Claudia Landi \cite{frosini1999size} (initially called size theory) and Herbert Edelsbrunner et al. \cite{edelsbrunner2000topological}. The important papers by Gunnar Carlsson \cite{carlsson2009topology}, Robert Ghrist \cite{ghrist2008barcodes} and Shmuel Weinberger \cite{weinberger2011persistent} were followed by substantial developments by Fred Chazal \cite{chazal2016structure} and many others. Persistent homology is a key tool of TDA \cite{EdHa08} and is invariant up to isometry (transformations which maintain inter-point distances).
The famous stability theorem \cite{CEH05} states that under bounded noise, the bottleneck distance between persistence diagrams of a point set and its perturbation has an upper bound dependent on the magnitude of the perturbation. As such, a small perturbation of a point set results in at most a small change in its corresponding persistent homology.
However, there is no lower bound, which means that a perturbation of a point set can result in the corresponding persistent homology remaining unchanged. This is an issue for applications that require invariants to reliably distinguish point sets up to isometry or similar equivalence relations such as rigid motion or uniform scaling. A uniform scaling also scales persistence, but any non-uniform scaling or a more general continuous deformation of data changes persistence rather arbitrarily. Hence any persistence-based approach analyses data only up to isometry, which is an important equivalence due to the rigidity of many real-life structures, see Fig.~\ref{fig:dominoes}.
\begin{figure}
\caption{ Many non-isometric sets whose points form $1\times 2$ dominoes have the same 0D persistence (determined by edge-lengths of a Minimum Spanning Tree in green) and trivial 1D persistence for the filtrations of Vietoris-Rips, Cech and Delaunay complexes. Theorem~\ref{thm:tail} extends these examples to a large open subspace of point sets by adding `tails' at red corners. }
\label{fig:dominoes}
\end{figure}
The above facts motivate a comparison of persistent homology with other isometry invariants of point sets. One key application is for rigid periodic crystals whose isometry classification was recently initiated by Herbert Edelsbrunner et al.\cite{edelsbrunner2021density}. A periodic crystal is naturally modeled by a periodic set of points representing atomic centres.
This paper describes how a point sequence of arbitrary size can be added to any finite point set whilst leaving the one-dimensional persistent homology unchanged, and goes on to define a large continuous family of point sets that have trivial persistence. We focus on one-dimensional persistent homology of point sets in any dimension, computed using filtrations of simplicial complexes including Vietoris-Rips, Cech and Delaunay complexes.
Our main Theorem~\ref{thm:tail} and Corollary~\ref{cor:PD=0} build large open spaces of point sets that all have identical 1D persistence. With similar aims, Curry et al. described spaces of Morse functions on the interval \cite{curry2018fiber} and the sphere $S^2$ \cite{catanzaro2020moduli} that have identical persistence. The experiments use the Vietoris-Rips filtration whose persistence is implemented by Ripser\cite{Bau21}. Higher dimensional persistent homology will be included in future updates.
\begin{figure}
\caption{ The initial set $A$ of 10 points in the centre is extended by four tails going out from red points. All such sets have trivial 1D persistence by Corollary~\ref{cor:PD=0}, but all such sets in general position are not isometric to each other. The black edges form a Minimum Spanning Tree of the set. }
\label{fig:set+tail}
\end{figure}
Section~\ref{sec:persistence} introduces definitions and proves auxiliary lemmas needed for our main Theorem~\ref{thm:tail}, which describes how, given a finite point set, we can add an arbitrarily large point set without affecting the one-dimensional persistent homology. Section~\ref{sec:experiments} summarises large-scale experiments that reveal interesting information on the prevalence, or more likely lack, of significant persistent features occurring in randomly generated point sets.
\section{Three classes of edges important for 1D persistence} \label{sec:edges}
This section introduces three classes of edges (short, medium and long) that will help build point sets with identical 1D persistence. Since persistent homology can be defined for any filtration of simplicial complexes on an abstract finite set $A$, the most general settings are recalled in Definition~\ref{dfn:filtration}. Definition~\ref{dfn:complexes} describes the more explicit filtrations of Vietoris-Rips, \v{C}ech{ }and Delaunay complexes on a finite set $A$ in any metric space $M$ (or just in $\mathbb R^N$ for Delaunay complexes).
\begin{definition}[Filtration of complexes $\{C(A;\alpha)\}$] \label{dfn:filtration} Let $A$ be any abstract finite set.
\noindent \textbf{(a)} A (simplicial) \emph{complex} $C$ on $A$ is a finite collection of subsets $\sigma\subset A$ (called \emph{simplices}) such that all subsets of $\sigma$ and any intersection of simplices are also simplices of $C$.
\noindent \textbf{(b)} The \emph{dimension} of a simplex $\sigma$ consisting of $k+1$ points is $k$. We assume that all points of $A$ are 0-dimensional simplices, sometimes called \emph{vertices} of $C$. A 1-dimensional simplex (or \emph{edge}) $e$ between points $p,q\in A$ is the unordered pair denoted as $[p,q]$.
\noindent \textbf{(c)} An (ascending) \emph{filtration} $\{C(A;\alpha)\}$ is any family of simplicial complexes on the vertex set $A$, parameterised by a \emph{scale} $\alpha\geq 0$ such that $C(A;\alpha')\subseteq C(A;\alpha)$ for all $\alpha'\leq\alpha$.
$\blacksquare$ \end{definition}
Let $M$ be any metric space with a distance $d$ satisfying all metric axioms. For any points $p,q\in A\subset M$, the edge $e=[p,q]$ has length $|e|=d(p,q)$. An example of a metric space is $\mathbb R^N$ with the Euclidean metric. If $A\subset\mathbb R^N$, the edge $e=[p,q]$ can be geometrically interpreted as the straight-line segment connecting the points $p,q\in A\subset\mathbb R^N$.
Definition~\ref{dfn:complexes} introduces the simplicial complexes $\mathrm{VR}(A;\alpha)$ and $\mathrm{\check{C}ech}(A;\alpha)$ on any finite set $A$ inside an ambient metric space $M$, although $A=M$ is possible. For a point $p\in A$ and $\alpha\geq 0$, let $\bar B(p;\alpha)\subset M$ denote the closed ball with centre $p$ and radius $\alpha$.
A Delaunay complex $\mathrm{Del}(A;\alpha)\subset\mathbb R^N$ will be defined for a finite set $A\subset\mathbb R^N$ because of extra complications arising if a point set $A$ lives in a more general metric space \cite{boissonnat2018obstruction}.
\begin{definition}[Complexes $\mathrm{VR}(A;\alpha),\mathrm{\check{C}ech}(A;\alpha),\mathrm{Del}(A;\alpha)$] \label{dfn:complexes} Let $A\subset M$ be any finite set of points. Fix a \emph{scale} $\alpha\geq 0$. Each complex $C(A;\alpha)$ below has vertex set $A$.
\noindent \textbf{(a)} The \emph{Vietoris-Rips} complex $\mathrm{VR}(A;\alpha)$ has all simplices on points $p_1,\dots,p_k\in A$ whose pairwise distances are all at most $2\alpha$, so $d(p_i,p_j)\leq 2\alpha$ for all distinct $i,j\in\{1,\dots,k\}$.
\noindent \textbf{(b)} The \emph{\v{C}ech} complex $\mathrm{\check{C}ech}(A;\alpha)$ has all simplices on points $p_1,\dots,p_k\in A$ such that the full intersection $\cap_{i=1}^k \bar B(p_i;\alpha)$ is not empty.
\noindent \textbf{(c)} For any finite set of points $A\subset\mathbb R^N$, the convex hull of $A$ is the intersection of all closed half-spaces of $\mathbb R^N$ containing $A$. Each point $p_i\in A$ has the \emph{Voronoi domain}
$$V(p_i)=\{q\in\mathbb R^N\mid |q-p_i|\leq |q-p_j|\text{ for any point } p_j\in A-\{p_i\}\}.$$ The Delaunay complex $\mathrm{Del}(A;\alpha)$ has all simplices on points $p_1,\dots,p_k\in A$ such that the intersection $\cap_{i=1}^k (V(p_i)\cap\bar B(p_i;\alpha))$ is not empty \cite{delaunay1934sphere}. Alternatively, a simplex $\sigma$ on points $p_1,\dots,p_k\in A$ is a \emph{Delaunay} simplex if: (a) the smallest $(k-2)$-dimensional sphere $S^{k-2}$ passing through $p_1,\dots,p_k$ has a radius at most $\alpha$; (b) there is also an $(N-1)$-dimensional sphere $S^{N-1}$ passing through $p_1,\dots,p_k$ that does not enclose any points of $A$.
In a degenerate case, the smallest $(k-2)$-dimensional sphere $S^{k-2}$ above can contain more than $k$ points of $A$. If $\sigma$ is enlarged to the convex hull $H$ of all points $A\cap S^{k-2}$, then $\mathrm{Del}(A;\alpha)$ becomes a polyhedral \emph{Delaunay mosaic} \cite{biswas2022continuous}. For simplicity, we choose any triangulation of $H$ into Delaunay simplices. Then a Delaunay complex $\mathrm{Del}(A;\alpha)\subset\mathbb R^N$ is a subset of a Delaunay triangulation of the convex hull of $A$, which is unique in general position.
\noindent The complexes of the three types above will be called \emph{geometric complexes} for brevity.
$\blacksquare$ \end{definition}
Both complexes $\mathrm{VR}(A;\alpha)$ and $\mathrm{\check{C}ech}(A;\alpha)$ are abstract and so are not embedded in $\mathbb R^N$, even if $A\subset\mathbb R^N$. Though $\mathrm{Del}(A;\alpha)$ is embedded into $\mathbb R^N$, its construction is fast enough only in dimensions $N=2,3$. For high dimensions $N>3$ or any metric space $M$, the simplest complex to build and store is $\mathrm{VR}(A;\alpha)$. Indeed, $\mathrm{VR}(A;\alpha)$ is a flag complex determined by its 1-dimensional skeleton $\mathrm{VR}^1(A;\alpha)$ so that any simplex of $\mathrm{VR}(A;\alpha)$ is built on a complete subgraph whose vertices are pairwise connected by edges in $\mathrm{VR}^1(A;\alpha)$.
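To make this flag-complex description concrete, here is a minimal sketch (our own illustration, not code from the paper) that lists the edges of the 1-skeleton $\mathrm{VR}^1(A;\alpha)$ for a finite set $A\subset\mathbb R^N$ given as an array of coordinates; all higher simplices of $\mathrm{VR}(A;\alpha)$ are then exactly the cliques of this graph.
\begin{verbatim}
# Minimal sketch: the 1-skeleton VR^1(A; alpha) of the Vietoris-Rips
# complex; an edge [i, j] is present exactly when d(A[i], A[j]) <= 2*alpha.
import numpy as np
from itertools import combinations

def vietoris_rips_edges(A, alpha):
    A = np.asarray(A, dtype=float)
    return [(i, j) for i, j in combinations(range(len(A)), 2)
            if np.linalg.norm(A[i] - A[j]) <= 2 * alpha]

# Since VR(A; alpha) is a flag complex, a k-simplex {i_0, ..., i_k}
# belongs to it if and only if every pair {i_a, i_b} is an edge above.
\end{verbatim}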
The key idea of TDA is to view any finite set $A\subset\mathbb R^N$ through lenses of a variable scale $\alpha\geq 0$. When $\alpha$ is increasing from the initial value 0, the points of $A$ become blurred to balls of radius $\alpha$ and may start forming topological shapes that `persist' over long intervals of $\alpha$. More formally, for any fixed $\alpha\geq 0$, the union $\cup_{p\in A}\bar B(p;\alpha)$ of closed balls is homotopy equivalent to the \v{C}ech{ } complex $\mathrm{\check{C}ech}(A;\alpha)$ and also to the Delaunay complex $\mathrm{Del}(A;\alpha)\subset\mathbb R^N$ by the Nerve Lemma \cite[Corollary 4G.3]{Hat01}.
For any geometric complex $C(A;\alpha)$ from Definition~\ref{dfn:complexes}, all connected components of $C(A;\alpha)$ are in a 1-1 correspondence with all connected components of the union $\cup_{p\in A}\bar B(p;\alpha)$. Any edge $e$ enters $C(A;\alpha)$ when $\alpha$ equals the edge's half-length $|e|/2$.
\begin{definition}[Short, medium, long edges in a filtration] \label{dfn:edges} Let $\{C(A;\alpha)\}$ be any filtration of complexes on an abstract finite vertex set $A$, see Definition~\ref{dfn:filtration}. Let an edge $e=[p,q]$ between points $p,q\in A$ enter the simplicial complex $C(A;\alpha)$ at scale $\alpha$.
\noindent \textbf{(a)} Consider the 1-dimensional graph $C'(A;\alpha)$ with vertex set $A$ and all edges from $C(A;\alpha)$ except the edge $e$. If the endpoints of $e$ are in different connected components of $C'(A;\alpha)$, then the edge $e$ is called \emph{short} in the filtration $\{C(A;\alpha)\}$.
\noindent \textbf{(b)} The edge $e$ is called \emph{long} in $\{C(A;\alpha)\}$ if $A$ has a vertex $v$ such that the 2-simplex $\triangle pqv$ is contained in $C(A;\alpha)$ and both edges $[p,v],[v,q]$ are in $C(A;\alpha')$ for some $\alpha'<\alpha$.
\noindent \textbf{(c)} If $e$ is neither short nor long, then the edge $e$ is called \emph{medium} in $\{C(A;\alpha)\}$.
$\blacksquare$ \end{definition}
Definition~\ref{dfn:edges}(b) implies that any long edge enters $C(A;\alpha)$ with a 2-simplex $\triangle pqv$ at the same scale $\alpha$ and the boundary of this 2-simplex is homologically trivial in $C(A;\alpha)$ due to the other two edges $[p,v],[v,q]$ that entered the filtration at a smaller scale $\alpha'<\alpha$.
Classes of edges in Definition~\ref{dfn:edges} were introduced only in terms of an abstract filtration of complexes. Lemma~\ref{lem:long_edges} interprets long edges in VR and Cech filtrations via distances.
\begin{lemma}[Long edges in $\mathrm{VR}$ and $\mathrm{\check{C}ech}$] \label{lem:long_edges} Let $A$ be a finite set in a metric space.
\noindent \textbf{(a)} In the Vietoris-Rips filtration $\{\mathrm{VR}(A;\alpha)\}$, an edge $e=[p,q]$ is long if and only if the set $A$ has a point $v$ such that $e=[p,q]$ is a strictly longest edge in the triangle $\triangle pqv$.
\noindent \textbf{(b)} In the \v{C}ech{ }filtration $\{\mathrm{\check{C}ech}(A;\alpha)\}$, an edge $e$ is long if and only if $\mathrm{\check{C}ech}(A;\alpha)$ includes a triangle $\triangle pqv$ such that the edge $e=[p,q]$ is strictly longest in $\triangle pqv$ and the triple intersection $\bar B(p;\alpha)\cap\bar B(q;\alpha)\cap\bar B(v;\alpha)$ is not empty for $\alpha=d(p,q)/2$.
$\blacksquare$ \end{lemma} \begin{proof}
For both filtrations, an edge $e$ enters $C(A;\alpha)$ when $\alpha=|e|/2$. By Definition~\ref{dfn:edges}(b), a long edge enters $C(A;\alpha)$ together with a 2-simplex $\triangle pqv$. Since the other two edges $[p,v],[v,q]$ entered the filtration at a smaller scale, the edge $e=[p,q]$ is longest in $\triangle pqv$. For the \v{C}ech{ }filtration, the triple intersection $\bar B(p;\alpha)\cap\bar B(q;\alpha)\cap\bar B(v;\alpha)$ should be non-empty to guarantee that $\mathrm{\check{C}ech}(A;\alpha)$ includes the 2-simplex $\triangle pqv$, see Definition~\ref{dfn:complexes}(b). \end{proof}
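For the Vietoris-Rips filtration the three classes of edges can be decided directly from pairwise distances. The following is a minimal sketch (our own illustration under the characterisations above, not code from the paper): an edge is long when some third point makes it strictly longest in a triangle, short when deleting it leaves its endpoints in different components among all edges of length at most its own, and medium otherwise.
\begin{verbatim}
# Minimal sketch: classify an edge [p, q] of the Vietoris-Rips filtration
# of a finite point set A in R^N as 'short', 'medium' or 'long'.
import numpy as np
from itertools import combinations

def classify_edge(A, p, q):
    A = np.asarray(A, dtype=float)
    D = np.linalg.norm(A[:, None, :] - A[None, :, :], axis=-1)
    L = D[p, q]
    # long: some third point v makes [p, q] the strictly longest edge
    for v in range(len(A)):
        if v not in (p, q) and D[p, v] < L and D[q, v] < L:
            return 'long'
    # short: without [p, q], its endpoints lie in different components
    # of the graph of all edges of length at most L (scale alpha = L/2)
    adj = {i: set() for i in range(len(A))}
    for i, j in combinations(range(len(A)), 2):
        if {i, j} != {p, q} and D[i, j] <= L:
            adj[i].add(j); adj[j].add(i)
    stack, seen = [p], {p}
    while stack:
        u = stack.pop()
        for w in adj[u]:
            if w not in seen:
                seen.add(w); stack.append(w)
    return 'short' if q not in seen else 'medium'
\end{verbatim}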
\begin{example}
For any 3-point set $A\subset\mathbb R^N$, let the edges of $A$ have lengths $|e_1|\leq |e_2|<|e_3|$. By Definition~\ref{dfn:edges}, in $\{\mathrm{VR}(A;\alpha)\}$ the edge $e_3$ is long whilst the edges $e_1,e_2$ are short.
If $|e_1|<|e_2|=|e_3|$, then the edge $e_1$ is short but both edges $e_2,e_3$ are medium, not long. If $|e_1|=|e_2|=|e_3|$, then all three edges are medium.
Let $C(A;\alpha)$ be any geometric complex from Definition~\ref{dfn:complexes} on a finite set $A\subset\mathbb R^2$. If $A$ consists of four vertices of the unit square, all square sides are medium whilst both diagonals are long. If $A$ consists of four vertices of a rectangle that is not a square, the two shorter sides are short, the longer sides are medium and both diagonals are long.
$\blacksquare$ \end{example}
\begin{proposition}[Three classes of edges] \label{prop:classes_edges} For any finite set $A$ and a filtration $\{C(A;\alpha)\}$ from Definition~\ref{dfn:complexes}, all edges are split into three disjoint classes: short, medium, long.
$\blacksquare$ \end{proposition} \begin{proof} By Definition~\ref{dfn:edges}(b), the endpoints $p,q$ of any long edge $e=[p,q]\subset C(A;\alpha)$ are connected by a chain of two edges $[p,v]\cup[v,q]$ that entered the filtration at a smaller scale $\alpha'<\alpha$. Hence the long edge $e$ cannot be short by Definition~\ref{dfn:edges}(a). Hence the three classes of edges in Definition~\ref{dfn:edges} are disjoint. \end{proof}
Lemmas~\ref{lem:circumdisk} and~\ref{lem:Del_long_edges} will interpret long edges in a Delaunay complex in terms of angles.
\begin{lemma}[Circumdisk of a triangle] \label{lem:circumdisk} Let two triangles $\triangle pqu$, $\triangle pqv$ $\subset\mathbb R^2$ lie on the same side of a common edge $[p,q]$. If $\angle puq<\angle pvq$, for example, if $\angle puq$ is acute and $\angle pvq$ is non-acute, then the open circumdisk of $\triangle pqu$ contains $v$, see Fig.~\ref{fig:Delaunay_circumdisk}~(left). \end{lemma} \begin{proof} Let the infinite ray from $p$ via $v$ meet the circumcircle $C$ of $\triangle pqu$ at a point $w$. We have equal angles $\angle puq=\angle pwq$ whose sine is $\dfrac{d(p,q)}{2R}$, where $R$ is the radius of $C$. Since $\angle puq=\angle pwq<\angle pvq$, the point $v$ is inside the edge $[p,w]$, hence enclosed by $C$. \end{proof}
\begin{figure}
\caption{ \textbf{Left}: the circumdisk of $\triangle pqu$ contains $v\in A$, see Lemma~\ref{lem:circumdisk} and Lemma~\ref{lem:Del_long_edges} for $N=2$ when $[p,q]$ is on the boundary of the convex hull of $A$. \textbf{Middle}: a proof of Lemma~\ref{lem:Del_long_edges} for $N=2$ when $[p,q]$ is inside the convex hull of $A$. \textbf{Right}: the circumdisk $D_t$ of $\triangle pqu_t$ contains $v\in A$, leading to a contradiction in the proof of Lemma~\ref{lem:Del_long_edges} for $N\geq 3$.}
\label{fig:Delaunay_circumdisk}
\end{figure}
\begin{lemma}[Long edges in $\mathrm{Del}(A;\alpha)$] \label{lem:Del_long_edges} Let $\alpha\geq 0$ and $A\subset\mathbb R^N$ be a finite set. An edge $e=[p,q]$ in the Delaunay complex $\mathrm{Del}(A;\alpha)$ is long by Definition~\ref{dfn:edges}(b) if and only if
\noindent \textbf{(a)} $\mathrm{Del}(A;\alpha)$ includes a triangle $\triangle pqv$ whose angle at $v$ is non-acute or, equivalently,
\noindent \textbf{(b)} the set $A$ has a point $v$ whose angle in the triangle $\triangle pqv$ is non-acute.
$\blacksquare$ \end{lemma} \begin{proof}
\textbf{(a)} By Definition~\ref{dfn:edges}(b) the edge $e$ is long if the Delaunay complex $\mathrm{Del}(A;\alpha)$ includes a triangle $\triangle pqv$ whose edge $e=[p,q]$ is strictly longest (hence the opposite angle at $v$ is strictly largest) and $\bar B(p;\alpha)\cap\bar B(q;\alpha)\cap\bar B(v;\alpha)\neq\emptyset$ for $\alpha=d(p,q)/2$. Since the intersection $\bar B(p;\alpha)\cap\bar B(q;\alpha)$ is the mid-point $u$ of the edge $e$, the triple intersection above is non-empty if and only if $d(u,v)\leq\alpha$. Equivalently, the circumcentre of $\triangle pqv$ lies non-strictly outside the triangle $\triangle pqv$ or the angle at $v$ in $\triangle pqv$ is non-acute.
\noindent \textbf{(b)} Due to part (a), it suffices to prove that if we have any triangle $\triangle pqv$ with a non-acute angle at $v$, we can find such a triangle within $\mathrm{Del}(A;\alpha)$. Assume the contrary that all triangles in $\mathrm{Del}(A;\alpha)$ containing $[p,q]$ have only acute angles opposite to $[p,q]$.
For $N=2$, the edge $[p,q]$ can have one or two Delaunay triangles whose edge $[p,q]$ is on the boundary or inside the convex hull of $A$, see the first two pictures of Fig.~\ref{fig:Delaunay_circumdisk}. In both cases by Lemma~\ref{lem:circumdisk}, the above point $v$ with a non-acute angle opposite to $[p,q]$ should be inside the circumdisk of one of these triangles $\triangle pqu$ having an acute angle at the vertex $u$ opposite to $[p,q]$. Then the triangle $\triangle pqu$ cannot be in $\mathrm{Del}(A;\alpha)$ by Definition~\ref{dfn:complexes}(c).
For $N\geq 3$, consider all $(N-1)$-dimensional Delaunay simplices containing the edge $[p,q]$. The above point $v$ lies (non-strictly) between a pair of successive $(N-1)$-dimensional subspaces spanned by two such simplices $\sigma_0,\sigma_1$ with common edge $[p,q]$. Let $D^N$ be the circumball of the $N$-dimensional simplex $\sigma\in\mathrm{Del}(A;\alpha)$ with faces $\sigma_0,\sigma_1$.
Choose a 1-parameter family of 2-dimensional planes $P_t$, $t\in[0,1]$, rotating around $[p,q]$ from $\sigma_0$ to $\sigma_1$ so that $\triangle pqu_0=P_0\cap\sigma\subset\sigma_0$ and $\triangle pqu_1=P_1\cap\sigma\subset\sigma_1$ are Delaunay triangles, while one intermediate plane $P_t$ contains $\triangle pqv$ with a non-acute angle opposite to $[p,q]$, see Fig.~\ref{fig:Delaunay_circumdisk}~(right). By the assumption, both $\triangle pqu_0,\triangle pqu_1\in\mathrm{Del}(A;\alpha)$ have acute angles opposite to $[p,q]$. The circumdisk $D_t$ of each $\triangle pqu_t=P_t\cap\sigma$ has radius $R_t=\sqrt{R^2-d_t^2}$, where $R$ is the radius of $D^N$ and $d_t$ is the distance from the centre $O$ of $D^N$ to $P_t$. Then $R_t$ varies from $R_0$ to $R_1$ over $t\in[0,1]$, possibly with a maximum corresponding to the plane $P_t$ passing through $O$, so
$R_t\geq\min\{R_0,R_1\}$ for $t\in[0,1]$.
By the sine theorem in each $\triangle pqu_t$, the angle opposite to $[p,q]$ has $\sin\angle pu_t q=\dfrac{d(p,q)}{2R_t}$. Since both $\triangle pqu_0$ and $\triangle pqu_1$ have acute angles opposite to $[p,q]$, the lower bound $R_t\geq\min\{R_0,R_1\}$ implies that $\angle pu_t q\leq\max\{\angle pu_0 q,\angle pu_1 q\}$ is acute for all $t\in[0,1]$. For the intermediate plane $P_t$ containing the vertex $v$, by Lemma~\ref{lem:circumdisk} the circumdisk $D_t\subset D^N$ of $\triangle pqu_t$ should include the point $v$ because $\angle pvq$ is non-acute and $\angle pu_t q$ is acute.
We get a contradiction with Definition~\ref{dfn:complexes}(c) because the open circumball $D^N$ of the Delaunay simplex $\sigma\in\mathrm{Del}(A;\alpha)$ includes an extra point $v\in A$. \end{proof}
\section{Trivial persistence and tails without medium edges} \label{sec:tails}
As usual in TDA, we consider homology groups with coefficients in a field, say $\mathbb Z_2$.
\begin{proposition}[No medium edges $\Rightarrow$ trivial $H_1$] \label{prop:no-medium} For any filtration $\{C(A;\alpha)\}$ on a finite abstract set $A$ from Definition~\ref{dfn:filtration}, when a scale $\alpha\geq 0$ is increasing, a new homology cycle in $H_1(C(A;\alpha))$ can be created only due to a medium edge in $C(A;\alpha)$. Hence, if $\{C(A;\alpha)\}$ has no medium edges, then $H_1(C(A;\alpha))$ is trivial for $\alpha\geq 0$.
$\blacksquare$ \end{proposition} \begin{proof} When building the given complex $C(A;\alpha)$, if we add a short edge $e$, by Definition~\ref{dfn:edges}(a), the two previously disconnected components of $C^1(A;\alpha)$ containing the endpoints $p,q$ of $e$ become connected. Hence no 1-dimensional cycle in $C^1(A;\alpha)$ is created.
By Definition~\ref{dfn:edges}(b) any long edge $e=[p,q]$ enters $C(A;\alpha)$ strictly after two edges $[p,v],[v,q]$, and at the same time as the 2-simplex $\triangle pqv$.
Any closed cycle $\gamma$ including the new edge $[p,q]$ is homologically equivalent to the cycle with $[p,q]$ replaced by the chain of two edges $[p,v]\cup[v,q]$. If $\gamma$ has other edges $e_1,\dots,e_k$ of the same length as $e$, the endpoints of $e_i$ are connected by the complementary path $\gamma-e_i$, so each $e_i$ cannot be short by Definition~\ref{dfn:edges}(a), $i=1,\dots,k$. If any edge $e_i$, $i=1,\dots,k$, is long by Definition~\ref{dfn:edges}(b), then $e_i$ can be similarly replaced by a chain of two earlier edges.
In all cases, the cycle $\gamma$ is homologically equivalent to a cycle in $C(A;\alpha')$ for a smaller $\alpha'<\alpha$. So a long edge cannot create a new class in $H_1(C(A;\alpha))$. Since only medium edges lead to non-trivial cycles, if $A$ has no medium edges, then $H_1(C(A;\alpha))$ is trivial. \end{proof}
\begin{definition}[Tail $T$ of points] \label{dfn:tail} For a fixed filtration $\{C(A;\alpha)\}$ on a finite abstract set $A$ from Definition~\ref{dfn:filtration}, a \emph{tail} $T\subset M$ is any ordered sequence $T=\{p_1,\dots,p_n\}$, where $p_1$ is the \emph{vertex} of $T$, any edge between successive points $[p_i,p_{i+1}]$ is short, and any edge between non-successive points $[p_i,p_{j}]$ is long for any $1\leq i<j\leq n$.
$\blacksquare$ \end{definition}
\begin{proposition}[Tails have trivial $\mathrm{PD}_1$] \label{prop:tail} Any tail $T$ from Definition~\ref{dfn:tail} for a filtration $\{C(T;\alpha)\}$ of complexes from Definition~\ref{dfn:filtration} has
trivial 1D persistence. \end{proposition} \begin{proof} Since any tail $T$ has no medium edges by Proposition~\ref{prop:classes_edges}, the tail $T$ has trivial $H_1(C(T;\alpha))$ for any $\alpha\geq 0$ by Proposition~\ref{prop:no-medium}, hence trivial 1D persistence. \end{proof}
If vectors are not explicitly specified, all edges and straight lines are unoriented. We measure the angle between unoriented straight lines as their minimum angle within $[0,\frac{\pi}{2}]$.
\begin{definition}[Angular deviation $\omega(T;R)$ from a ray $R$ in $\mathbb R^N$] \label{dfn:ang_deviation} In $\mathbb R^N$, a \emph{ray} is any half-infinite line $R$ going from a point $v$ called the \emph{vertex} of $R$. For any sequence $T=\{p_1,\dots,p_n\}$ of ordered points in $\mathbb R^N$, the \emph{angular deviation} $\omega(T;R)$ of $T$ relative to the ray $R$ is the maximum angle $\angle(R,[p, q])\in[0,\frac{\pi}{2}]$ over all distinct points $p,q\in T$.
$\blacksquare$ \end{definition}
\begin{figure}
\caption{ A tail $T$ around a ray $R$ with vertex $v$ in $\mathbb R^2$, see Definitions~\ref{dfn:ang_deviation} and~\ref{dfn:thickness}. \textbf{Left}: all angles are not greater than the angular deviation $\omega(T;R)$. \textbf{Right}: the angular thickness $\theta(T;R)$ can be smaller than $\omega(T;R)$.}
\label{fig:ray+tail}
\end{figure}
\begin{lemma}[Tails in $\mathbb R^N$] \label{lem:tail} Let $R$ be a ray with vertex $v=p_1$ and $T=\{p_1,\dots,p_n\}\subset\mathbb R^N$ be any sequence of points with angular deviation $\omega(T;R)<\frac{\pi}{4}$.
\noindent \textbf{(a)} For any $p_i,p_j,p_k$ with $i<j<k$, the angle $\angle p_i p_j p_k$ is non-acute. The edge between the non-successive points $p_i,p_k$ is long in any filtration $\{C(T;\alpha)\}$ in Definition~\ref{dfn:complexes}.
\noindent \textbf{(b)} Any edge between successive points $p_{i-1},p_i$, $i=2,\dots,n$, is short in $\{C(T;\alpha)\}$.
\noindent Hence $T$ has no medium edges in the filtration $\{C(T;\alpha)\}$ and is a tail by Definition~\ref{dfn:tail}. \end{lemma} \begin{proof} \textbf{(a)} The condition $\omega(T;R)<\frac{\pi}{4}$ implies that all points of $T$ are ordered by their distance from the vertex $v=p_1$ to their orthogonal projections to the line through $R$.
Apply a parallel shift to the points $p_i,p_j,p_k$ so that $p_j\in R$. In the triangle $\triangle p_i p_j p_k$, the angle $\angle(\lv{p_jp_i},\lv{p_jp_k})=\pi-\angle(R,[p_j p_i])-\angle(R,[p_j p_k])\geq\pi-2\omega(T;R)>\frac{\pi}{2}$ is non-acute, hence strictly largest, due to $\omega(T;R)<\frac{\pi}{4}$. The edge $[p_i,p_k]$ is long in any filtration $\{C(T;\alpha)\}$ by Definition~\ref{dfn:edges}(b) and (for the Delaunay filtration) Lemma~\ref{lem:Del_long_edges}(b). In particular, the edge $[p_i,p_k]$ is longer than both edges $[p_i,p_j]$ and $[p_j,p_k]$.
\noindent \textbf{(b)} The points $p_{i-1},p_{i}$ remain in disjoint components of $C^1(T;\alpha)$ after adding all other edges of length $d(p_{i-1},p_i)$. Indeed, we proved above that any other edge connecting points $p_j,p_k$ for $j\leq i-1<i\leq k$ is longer than the edge $[p_{i-1},p_i]$ between successive points. \end{proof}
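As a concrete illustration (our own sketch with arbitrarily chosen parameters), the following generates a point sequence around the positive $x$-axis in $\mathbb R^2$ with angular deviation below $\frac{\pi}{4}$ and checks numerically the two properties used in the proof: the deviation bound and the non-acute angle at every middle point.
\begin{verbatim}
# Minimal sketch: build a candidate tail around the ray R = positive
# x-axis in R^2 and verify the hypotheses of the lemma numerically.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 12
x = np.cumsum(rng.uniform(1.0, 2.0, size=n))   # increasing along the ray
y = rng.uniform(-0.2, 0.2, size=n)             # small deviation from R
T = np.column_stack([x - x[0], y - y[0]])      # vertex p_1 at the origin

def angle_with_x_axis(v):                      # angle of a line with R
    return np.arccos(abs(v[0]) / np.linalg.norm(v))

omega = max(angle_with_x_axis(T[j] - T[i])
            for i, j in combinations(range(n), 2))
assert omega < np.pi / 4                       # angular deviation bound

for i, j, k in combinations(range(n), 3):      # non-acute middle angles
    u, v = T[i] - T[j], T[k] - T[j]
    assert np.dot(u, v) <= 0
\end{verbatim}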
The angular thickness below is illustrated in Fig.~\ref{fig:ray+tail}~(right) for Theorem~\ref{thm:tail} later.
\begin{definition}[Angular thickness $\theta(T;R)$] \label{dfn:thickness} Let $R \subset \mathbb R^N$ be a ray with vertex $v$ and $T=\{p_1 = v,\dots,p_n\}$ be any finite sequence of points. The \emph{angular thickness} $\theta(T;R)$ of $T$ with respect to $R$ is the maximum angle $\angle(\vec R,\lv{p_1p_i})$ over $i=2,\dots,n$.
$\blacksquare$ \end{definition}
\section{Persistence for long wedges and with added tails} \label{sec:persistence}
\begin{definition}[Long wedges] \label{dfn:long_wedges} Let $A_1,\dots,A_k$ be finite sets sharing a common point $v$. For any filtration of complexes $\{C(\cup_{i=1}^k A_i;\alpha)\}$ from Definition~\ref{dfn:filtration}, the wedge $\cup_{i=1}^k A_i$ is called \emph{long} if any edge $[q_i,q_j]$ between distinct points $q_i\in A_i$, $q_j\in A_j$, $i\neq j$, is long in the filtration $\{C(\cup_{i=1}^k A_i;\alpha)\}$ in the sense of Definition~\ref{dfn:edges}(b).
$\blacksquare$ \end{definition}
For any filtration of simplicial complexes $\{C(A;\alpha)\}$ from Definition~\ref{dfn:filtration}, the 1D persistence diagram of this filtration is denoted by $\mathrm{PD}_1\{C(A;\alpha)\}$.
\begin{theorem}[Persistence of long wedges] \label{thm:long_wedges} For any filtration $\{C(\cup_{i=1}^k A_i;\alpha)\}$ of a long wedge from Definition~\ref{dfn:long_wedges}, the 1D homology group of the filtration at a given $\alpha$ is the direct sum: $H_1(C(\cup_{i=1}^k A_i;\alpha))=\oplus_{i=1}^k H_1(C(A_i;\alpha))$. Hence the 1D persistence diagram $\mathrm{PD}_1\{C(\cup_{i=1}^k A_i;\alpha)\}$ is the union of the 1D persistence diagrams $\mathrm{PD}_1\{C(A_i;\alpha)\}$ for $i=1,\dots,k$.
$\blacksquare$ \end{theorem} \begin{proof} The inclusions $A_i\subset\cup_{i=1}^k A_i$ induce the homomorphism of the 1D homology groups $\oplus_{i=1}^k H_1(C(A_i;\alpha))\to H_1(C(\cup_{i=1}^k A_i;\alpha))$ whose bijectivity follows below. Any long edge $e=[p,q]$ in a complex $C(\cup_{i=1}^k A_i;\alpha)$ can be replaced by a chain of two edges $[p,v]\cup[v,q]$ in $C(\cup_{i=1}^k A_i;\alpha')$ for some $\alpha'<\alpha$ due to a 2-simplex $\triangle pqv$ included into $C(\cup_{i=1}^k A_i;\alpha)$ by Definition~\ref{dfn:edges}(b). Continue applying these replacements until any cycle of edges in $C(\cup_{i=1}^k A_i;\alpha)$ becomes homologous to a sum of cycles in $C(A_i;\alpha)$, $i=1,\dots,k$. \end{proof}
\begin{theorem}[A long wedge with a tail] \label{thm:tail} Let $A\subset\mathbb R^N$ be any finite set, $v\in A$ be a point on the boundary of the convex hull of $A$, and $R$ be a ray with vertex $v$ so that $\mu(R;A)=\min\limits_{p\in A-\{v\}}\angle(\vec R,\lv{vp})>\frac{\pi}{2}$. Let $T$ be any tail with vertex $v$
for a filtration $\{C(T;\alpha)\}$, see Definition~\ref{dfn:tail}. If $\mu\geq\theta(T;R)+\frac{\pi}{2}$, then $\mathrm{PD}_1\{C(A\cup T;\alpha)\}=\mathrm{PD}_1\{C(A;\alpha)\}$.
$\blacksquare$ \end{theorem} \begin{proof}
For any points $p\in A$ and $q\in T$, we get the non-acute angle $$\angle(\lv{vp},\lv{vq})\geq \angle(\vec R,\lv{vp})-\angle(\vec R,\lv{vq}) \geq \mu-\angle(\vec R,\lv{vq})\geq\mu-\theta(T;R)\geq\frac{\pi}{2}.$$ If $p,q$ are in the same half-plane bounded by the line through $R$, the first inequality above becomes equality. Otherwise, $\angle(\lv{vp},\lv{vq})=\angle(\vec R,\lv{vp})+\angle(\vec R,\lv{vq})\geq \angle(\vec R,\lv{vp})-\angle(\vec R,\lv{vq})$.
Then the edge $[p,q]$ is long in the filtration $\{C(A\cup T;\alpha)\}$, not medium, by Definition~\ref{dfn:edges}(b). Hence $A\cup T$ is a long wedge by Definition~\ref{dfn:long_wedges}. Since the tail $T$ has trivial 1D persistence by Proposition~\ref{prop:tail}, Theorem~\ref{thm:long_wedges} implies that the persistence diagrams are identical: $\mathrm{PD}_1\{C(A\cup T;\alpha)\}=\mathrm{PD}_1\{C(A;\alpha)\}$. \end{proof}
\begin{corollary}[Trivial 1D persistence] \label{cor:PD=0} If a set $A$ in a metric space $M$ has
$\mathrm{PD}_1\{C(A;\alpha)\}=\emptyset$, then any long wedge $A\cup T$ with a tail $T\subset M$ also has $\mathrm{PD}_1\{C(A\cup T;\alpha)\}=\emptyset$.
$\blacksquare$ \end{corollary} \begin{proof} Since the tail $T$ has trivial 1D persistence by Proposition~\ref{prop:tail}, Theorem~\ref{thm:long_wedges} implies that
$\mathrm{PD}_1\{C(A\cup T;\alpha)\}=\mathrm{PD}_1\{C(A;\alpha)\}=\emptyset$. \end{proof}
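These statements can also be probed numerically. Below is a minimal sketch (our own illustration, assuming the Python package ripser is available) that attaches a tail along the negative $x$-axis to a planar set seen from the common vertex $v=(0,0)$ under a wide angle, so that the hypotheses of Theorem~\ref{thm:tail} hold, and prints the 1D persistence diagrams of $A$ and $A\cup T$ for comparison.
\begin{verbatim}
# Minimal sketch (assumes the 'ripser' package): compare PD_1 of A and
# of the long wedge A u T, where T is a tail with vertex v = (0, 0).
import numpy as np
from ripser import ripser

rng = np.random.default_rng(1)

# A: the vertex v plus 20 points in a narrow cone around the +x axis
A = np.vstack([[0.0, 0.0],
               rng.uniform([1.0, -0.3], [3.0, 0.3], size=(20, 2))])

# T: a tail with vertex v going out along the negative x-axis
steps = -np.cumsum(rng.uniform(1.0, 2.0, size=10))
T = np.vstack([[0.0, 0.0],
               np.column_stack([steps, rng.uniform(-0.2, 0.2, size=10)])])

dgm_A  = ripser(A, maxdim=1)['dgms'][1]
dgm_AT = ripser(np.vstack([A, T[1:]]), maxdim=1)['dgms'][1]
print("PD1(A)     =", np.round(dgm_A, 4))
print("PD1(A u T) =", np.round(dgm_AT, 4))   # expected to coincide
\end{verbatim}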
\section{Experiments, discussion and future work} \label{sec:experiments}
The experiments in this section increase the understanding of how regularly persistent homology reveals persistent features separated from noise. The experiments depend on two parameters, the size $n$ of a point set, and the dimension $N$ that the point set lies in. For each $n, N$ in the ranges chosen, we generate 1000 point sets of $n$ points uniformly sampled in a unit $N$-dimensional cube.
\begin{figure}
\caption{Histograms of the persistence $p=$ death$-$birth in 1000 point sets in nine configurations of the parameters $n$ and $N$. The $x$-axis is the persistence $p$, the $y$-axis is the percentage of pairs (birth,death) with the given persistence $p$. Top row: $N = 2$; middle row: $N = 5$; bottom row: $N = 8$. Left column: $n = 10$; middle column: $n = 15$; right column $n = 20$. }
\label{fig:hist}
\end{figure}
Figure~\ref{fig:hist} shows histograms of persistence (death$-$birth) of the one-dimensional features for nine configurations of the parameters: point set sizes $n=10,15,20$ and dimensions $N=2,5,8$. Each histogram highlights that the overwhelming majority of one-dimensional persistent features are skewed towards a low persistence, namely less than 10\% of the unit cube size. Geometrically, the corresponding dots (birth,death) would be close to the diagonal in a persistence diagram.
\begin{figure}
\caption{The median gap ratio of a point set with at least two 1D persistent features, as the size of the point set varies from $n = 10$ to $n = 40$ and the dimension $N$ varies from $N = 2$ to $N = 10$.}
\label{fig:gr1040}
\end{figure}
Recall that highly persistent features (birth,death) are naturally separated from others with lower persistence $p=$ death$-$birth by the widest diagonal gap in the persistence diagram, see \cite{smith2021skeletonisation}. If we order all pairs (birth,death) by their persistence $0<p_1\leq\dots\leq p_k$, the widest gap has the largest difference $p_{i+1}-p_i$ over $i=1,\dots,k-1$. This widest gap can separate several pairs (birth,death) from the rest, not necessarily just a single feature. However, the first widest gap is significant only if it can be easily distinguished from the second widest gap.
So the significance of persistence can be measured as the ratio of the first widest gap over the second widest gap. This invariant up to uniform scaling of given data is called the \emph{gap ratio}. Figure~\ref{fig:gr1040} shows the median gap ratio calculated over 1000 random point clouds in a unit cube for many dimensions $N=2,\dots,10$ and point set sizes $n=10,\dots,40$.
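For completeness, here is a minimal sketch (our own illustration) of the gap ratio computation for a single 1D persistence diagram given as an array of (birth, death) pairs.
\begin{verbatim}
# Minimal sketch: gap ratio of a persistence diagram, i.e. the widest gap
# between consecutive sorted persistence values divided by the second
# widest gap (at least three (birth, death) pairs are needed for two gaps).
import numpy as np

def gap_ratio(dgm):
    p = np.sort(dgm[:, 1] - dgm[:, 0])     # persistences death - birth
    gaps = np.sort(np.diff(p))[::-1]       # consecutive gaps, widest first
    if len(gaps) < 2 or gaps[1] == 0:
        return np.inf
    return gaps[0] / gaps[1]
\end{verbatim}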
Figure~\ref{fig:gr1040} implies that, for higher dimensions $N$, the median gap ratio quickly decreases to within the range $[1,2]$ as the number $n$ of points increases. Hence, when a persistence diagram contains at least two pairs (birth,death) above the diagonal, it becomes harder to separate highly persistent features from noisy artefacts close to the diagonal.
The future updates will include similar experiments for filtrations of \v{C}ech{ }complexes and Delaunay complexes. In conclusion, our main Theorem~\ref{thm:tail} describes how we can add an arbitrarily large point set to an existing point set without affecting the one-dimensional persistent homology, whilst Corollary~\ref{cor:PD=0} states how we can form a large continuous family of sets with trivial 1D persistence, implying that the bottleneck distance between persistence diagrams has no lower bound. We plan further experiments to check how well the bottleneck distance separates point clouds from their perturbations.
Other continuous isometry invariants \cite{widdowson2022average, widdowson2021pointwise} of finite and periodic point sets are complete in general position, hence distinguish almost all sets in $\mathbb R^N$. All counter-examples \cite{pozdnyakov2020incompleteness} to the completeness of past invariants were distinguished in \cite[appendix~C]{widdowson2021pointwise}. These latest invariants are based on the $k$-nearest neighbour search, a classical problem in Computer Science, which has near-linear time algorithms in the number of points \cite{elkin2021new, elkin2022paired}.
This research was supported by the £3.5M EPSRC grant `Application-driven Topological Data Analysis' (2018-2023, EP/R018472/1), the £10M Leverhulme Research Centre for Functional Materials Design (2016-2026) and the last author's Royal Academy of Engineering Fellowship `Data Science for Next Generation Engineering of Solid Crystalline Materials' (2021-2023, IF2122/186).
\end{document}
\renewcommand{\thesection}{\Alph{section}} \setcounter{section}{0}
\section{A Family of Trivial 1D Persistence Point Sets} \label{sec:results_old}
In this section, we state and prove Theorem~\ref{thm:ndtriv1d}, which describes a family of point sets ranging over any dimension, whose associated Vietoris-Rips filtrations all have trivial one-dimensional persistence. We start with some initial definitions and lemmas that build towards this theorem.
\begin{definition}[Tricycle] \label{def:tri}
Let the dimension $N \geq 1$. For an $N$-dimensional simplicial complex, we define a \textit{tricycle} to be any one-dimensional cycle that is either:
\begin{enumerate}[label = \alph*)]
\item homotopy equivalent to a point or;
\item belongs to a one-dimensional homology class which also contains a cycle comprised of exactly three edges.
\end{enumerate}
\end{definition}
If a cycle $C_1$ is a tricycle that belongs to a one-dimensional homology class, then that homology class also contains a cycle $C_2$ comprised of exactly three edges, so the class can be killed off simply by the addition of a 2-simplex whose boundary is $C_2$.
\begin{definition}[Tricycle Filtrations] \label{def:trifiltration}
We call a filtration $\mathcal{F}$ associated to a point set $A \subset \mathbb{R}^N$ a \textit{tricycle filtration} if, when an edge is added in $\mathcal{F}$, any new cycle that is formed is a tricycle.
\end{definition}
In a tricycle filtration, any one-dimensional homology class can be represented by a cycle comprised of three edges. From this, we can deduce that any Vietoris-Rips filtration that is also a tricycle filtration has trivial one-dimensional persistence.
\begin{lemma}[Trivial 1D Persistence Vietoris-Rips Filtrations] \label{lem:triv1dvr}
Let the dimension $N \geq 2$. Let $A \subset \mathbb{R}^N$ be a point set and let $\mathcal{F}$ be the Vietoris-Rips filtration associated with $A$. If $\mathcal{F}$ is a tricycle filtration, then $\mathcal{F}$ has trivial 1D persistence.
\end{lemma}
\begin{proof}
Since $\mathcal{F}$ is a tricycle filtration, each time an edge is added to $\mathcal{F}$, any new cycle that is formed is a tricycle. As such, any new cycle $C_1$ is either homotopy equivalent to a point, and thus does not belong to a one-dimensional homology class, or $C_1$ belongs to a one-dimensional homology class which also contains a cycle $C_2$ comprised of three edges. For this latter case, we recall that, by the definition of Vietoris-Rips complexes, a 2-simplex $\{v_1, v_2, v_3\}$ is present in the complex if and only if edges (or 1-simplices) exist between all pairs of the vertices of the 2-simplex. Therefore, since $C_2$ is a cycle comprised of three edges, in the Vietoris-Rips filtration $\mathcal{F}$, a 2-simplex with $C_2$ as its boundary must also be included at this point in the filtration. Hence, any new one-dimensional homology class is immediately killed off in the filtration. Consequently, no one-dimensional feature can ever persist in $\mathcal{F}$, and so $\mathcal{F}$ has trivial one-dimensional persistence.
\end{proof}
Our aim for the rest of this section is to construct a family of point sets whose Vietoris-Rips filtrations are tricycle filtrations, and thus have trivial one-dimensional persistence by Lemma~\ref{lem:triv1dvr}. The definitions of raysets and angular neighbourhoods introduce geometric objects that will be used as we construct the family of point sets.
\begin{definition}[Rayset, $S$, and Obtuse Rayset] \label{def:rayset}
We define a \textit{ray} $R$ to be a half-infinite line going from the origin $0 \in \mathbb{R}^N$ to infinity. A \textit{rayset}, $S = \{R_1, \ldots, R_m\}$, is a set of rays $R_1, \ldots, R_m$, $m \geq 2$. The minimum angle between two rays $R_i, R_j \in S$ is denoted by $\mu$. If $\mu > \frac{\pi}{2}$, we call the rayset \textit{obtuse}.
\end{definition}
\begin{definition}[Angular Neighbourhoods, $\mathcal{AN}$] \label{def:angneighray}
For an angle $0 \leq \omega \leq \pi$ and ray $R \subset \mathbb{R}^N$, we define the \textit{angular neighbourhood of a ray}, $\mathcal{AN}(R; \omega) \subset \mathbb{R}^N$, to be the set of all points $p \in \mathbb{R}^N$ for which the vector $\overrightarrow{0p}$ forms an angle less than $\omega$ with $R$. Namely, $\mathcal{AN}(R; \omega) = \{p \in \mathbb{R}^N \, \mid \, \text{angle}(\overrightarrow{0p}, R) < \omega\}$.
We define the \textit{angular neighbourhood of a rayset}, $\mathcal{AN}(S; \omega) \subset \mathbb{R}^N$, $S = \{R_1, \ldots, R_m\} \subset \mathbb{R}^N$, to be the set of all angular neighbourhoods $\mathcal{AN}(R_i; \omega), 1 \leq i \leq m$.
\end{definition}
\begin{lemma} \label{lem:obtuse_within_ray}
Let the dimension $N \geq 2$. Let $A \subset \mathbb{R}^N$ be a point set and let $L$ be a ray. Let $\theta$ be the maximum angle that a line segment passing through any two points $p, q \in A$ forms with $L$. Namely, $\theta = \max\{\text{angle}(\overrightarrow{pq}, L)\}$. If $\theta < \frac{\pi}{4}$, then no three points in $A$ span an acute triangle.
\end{lemma}
\begin{proof}
Consider three points $p_1, p_2, p_3 \in A$. We order them according to the distance from the origin of their orthogonal projection on to $L$, and WLOG we assume that $p_2$ lies between $p_1$ and $p_3$ under this projection. We will show that the angle $\text{angle}(\overrightarrow{p_2p_1}, \overrightarrow{p_2p_3})$ is obtuse. We can apply translations and rotations so that, WLOG, $L$ is the line $(x_1, 0, \ldots, 0)$, $x_1 \in \mathbb{R}$, $p_2$ lies on $L$, and the vector $\overrightarrow{p_2p_3}$ is of the form $(x, y, 0, \ldots, 0)$, $x > y$, $x > 0$, $y \geq 0$. The vector $\overrightarrow{p_2p_1}$ must have the form $(-r, s, \ldots, t)$, $r > |s|$, $r > 0$. Since the angle between two vectors $\overrightarrow{u}, \overrightarrow{v}$ is given by the formula \begin{equation*} \text{angle}(\overrightarrow{u},\overrightarrow{v}) = \cos^{-1}\left(\frac{\overrightarrow{u} \cdot \overrightarrow{v}}{\norm{\overrightarrow{u}}\norm{\overrightarrow{v}}}\right), \end{equation*} the angle between two vectors is obtuse if and only if $\overrightarrow{u} \cdot \overrightarrow{v}$ is negative. We have that $\overrightarrow{p_2p_3} \cdot \overrightarrow{p_2p_1} = -xr + ys$, and since $xr$ is positive, $-xr + ys < 0$ if $xr > ys$. Indeed, $\lvert x\rvert > \lvert y\rvert$ and $\lvert r\rvert > \lvert s\lvert$ confirms that $xr > ys$, and so $\text{angle}(\overrightarrow{p_2p_1}, \overrightarrow{p_2p_3})$ is obtuse.
\end{proof}
Theorem~\ref{thm:ndtriv1d} introduces a family of point sets for which the associated Vietoris-Rips filtrations have trivial one-dimensional persistence. An example of a point set in this family can be seen in Figure~\ref{fig:family_example}.
\begin{theorem}[Sets with Trivial 1D Persistence] \label{thm:ndtriv1d}
For dimension $N \geq 2$, let $S = \{R_1, \ldots, R_m\} \subset \mathbb{R}^N$ be an obtuse rayset whose minimum angle between two of the rays is $\mu > \frac{\pi}{2}$. For an angle $0 \leq \omega \leq \pi$, consider a point set $A \subset \mathcal{AN}(S, \omega)$ that has $0 \in A$. For any two points $p, q$ that both lie in $A \cap \mathcal{AN}(R_i; \omega)$ for some ray $R_i \in S$, let $\theta$ be the maximum angle between $\overrightarrow{pq}$ and $R_i$. If it holds that \begin{align*} \theta &< \frac{\pi}{4},\\ \theta + \omega &< \mu - \frac{\pi}{2}, \end{align*} then the Vietoris-Rips filtration associated with the point set $A$ has trivial one-dimensional persistence.
\end{theorem}
\begin{figure}
\caption{A point set that satisfies the conditions of Theorem~\ref{thm:ndtriv1d} and Corollary~\ref{cor:1}, and thus its associated Vietoris-Rips filtration has trivial one-dimensional persistence.}
\label{fig:family_example}
\end{figure}
\begin{proof}
We prove that the Vietoris-Rips filtration $\mathcal{F}$ associated with the point set $A$ has trivial one-dimensional persistence by showing that $\mathcal{F}$ is a tricycle filtration. The result then follows from Lemma~\ref{lem:triv1dvr}. Therefore, we need to show that when an edge is added to $\mathcal{F}$ creating a new cycle $C$, this new cycle $C$ is either homotopy equivalent to a point, or $C$ belongs to a homology class which also contains a cycle comprised of exactly three edges. WLOG, we can assume that $C$ is comprised of the fewest edges of all cycles in its homology class. By forming $m$ sets $A_1, \ldots, A_m$, where $A_i = \{A \cap \mathcal{AN}(R_i; \omega)\}$, we have three cases to consider:
\textbf{Case 1: The vertices of $C$ are contained within one of the sets $A_i$.} Considering just this set $A_i$, we order points in $A_i$ according to the distance from the origin of their orthogonal projection on to the ray $R_i$. Since $\theta < \frac{\pi}{4}$, Lemma~\ref{lem:obtuse_within_ray} tells us that, for points $p_1, p_2, p_3 \in A_i$, if a point $p_2$ lies between points $p_1$ and $p_3$ in the ordering, then the triangle $\Delta p_1p_2p_3$ is obtuse with the obtuse angle at $\text{angle}(\overrightarrow{p_2p_1}, \overrightarrow{p_2p_3})$, and hence $p_1$ is closer to $p_2$ than $p_3$ in the Euclidean distance. Therefore, for points $p, q \in A_i$, if there is an edge between $p$ and $q$, there must also be an edge between $p$ and all points in between $p$ and $q$ in the ordering, and also between all these points and $q$. This is sufficient to then deduce that $C$ can have at most three edges.
\textbf{Case 2: The vertices of $C$ are contained within exactly two of the sets $A_i, A_j$.} Again, we order points within each set according to the distance from the origin of their orthogonal projection on to the corresponding ray $R_i$ or $R_j$. Since $\theta + \omega < \mu - \frac{\pi}{2}$, a point $p_1 \in A_i$ connected to a point $p_2 \in A_j$ must also be connected to all points lower in the ordering of points in $A_j$, and vice versa for $p_2$ and points in $A_i$. Moreover, if $p_1 \in A_i$ is connected to $p_2 \in A_j$, all points lower in the ordering of points in $A_j$ must also be connected with $p_2$. Therefore, if $p_1$ is connected to $p_2$, then all points between $p_1$ and the origin in the ordering of $A_i$, and all points between $p_2$ and the origin in the ordering of $A_j$, are connected to $p_2$, and all these points are also connected to $p_1$. This is sufficient to then deduce that $C$ can have at most three edges.
\textbf{Case 3: The vertices of $C$ are contained within at least three of the sets $A_i$.} Since $\theta + \omega < \mu - \frac{\pi}{2}$, any point $p_1 \in A_i$ connected to $p_2 \in A_j$ must already be connected to the origin, and so must $p_2$. Therefore, the 2-simplex $\{0, p_1, p_2\}$ must exist in the filtration, and so the edge from $p_1$ to $p_2$ is homotopy equivalent to a point. Therefore, since any edge between points in different sets $A_i, A_j$ is homotopy equivalent to a point, $C$ is homotopy equivalent to a cycle contained within one of the sets $A_i$, which is Case 1 that has already been considered.
Therefore, $\mathcal{F}$ must be a tricycle filtration, and by Lemma~\ref{lem:triv1dvr}, $\mathcal{F}$ must have trivial one-dimensional persistence.
\end{proof}
The following corollary provides a weaker condition that guarantees that the associated Vietoris-Rips filtration of a point set has trivial one-dimensional persistence. However, the condition stated allows easier construction of such point sets, including the point set depicted in Figure~\ref{fig:family_example}.
\begin{corollary} \label{cor:1}
As in the setting of Theorem~\ref{thm:ndtriv1d}, let a point set $A \subset \mathcal{AN}(S, \omega)$, where $S = \{R_1, \ldots, R_m\} \subset \mathbb{R}^N$ is an obtuse rayset with $\mu > \frac{\pi}{2}$ as the minimum angle between two rays. Let $A_i = \{A \cap \mathcal{AN}(R_i; \omega)\}$, let $d$ be the maximum distance of a point $p \in A_i$ from its orthogonal projection on to $R_i$, and let $\delta$ be the minimum distance between two points $p, q \in A_i$, after they have been orthogonally projected onto $R_i$. If we have \begin{equation*} d < \frac{\delta}{2}\tan\left(\frac{\mu}{2} - \frac{\pi}{4}\right), \end{equation*} then the Vietoris-Rips filtration associated with the point set $A$ has trivial one-dimensional persistence.
\end{corollary}
\begin{proof}
We show that $A$ satisfies the conditions of Theorem~\ref{thm:ndtriv1d}, that $\theta < \frac{\pi}{4}$ and $\theta + \omega < \mu - \frac{\pi}{2}$. We have that $\theta \leq \tan^{-1}(\frac{2d}{\delta})$ and $\omega < \tan^{-1}(\frac{d}{\delta}) < \tan^{-1}(\frac{2d}{\delta})$. Hence $\theta + \omega < 2\tan^{-1}(\frac{2d}{\delta})$. Therefore, we need $2\tan^{-1}(\frac{2d}{\delta}) < \mu - \frac{\pi}{2}$ and $\tan^{-1}(\frac{2d}{\delta}) < \frac{\pi}{4}$. Since $\mu \leq \pi$, the former inequality is a more restrictive version of the latter inequality. Rearranging this former inequality, we find that we are constrained by the inequality $d < \frac{\delta}{2}\tan(\frac{\mu}{2} - \frac{\pi}{4})$.
\end{proof}
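As an illustration of the corollary (our own sketch, assuming the Python package ripser is available), the following builds a planar point set from an obtuse rayset with three rays $120$ degrees apart ($\mu = \frac{2\pi}{3}$), spacing $\delta = 1$ along each ray and perpendicular offsets below $\frac{\delta}{2}\tan(\frac{\mu}{2}-\frac{\pi}{4})$, and computes its 1D persistence diagram, which is expected to be empty.
\begin{verbatim}
# Minimal sketch (assumes the 'ripser' package): a point set built from
# an obtuse rayset as in the corollary; its 1D persistence should vanish.
import numpy as np
from ripser import ripser

rng = np.random.default_rng(2)
delta = 1.0
bound = 0.5 * delta * np.tan(np.pi / 3 - np.pi / 4)  # (delta/2)tan(mu/2-pi/4)

points = [np.zeros(2)]                                # common vertex 0
for angle in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):     # rays 120 deg apart
    u = np.array([np.cos(angle), np.sin(angle)])      # ray direction
    normal = np.array([-u[1], u[0]])                  # unit normal
    for k in range(1, 8):
        offset = rng.uniform(-0.9 * bound, 0.9 * bound)
        points.append(k * delta * u + offset * normal)

dgm1 = ripser(np.array(points), maxdim=1)['dgms'][1]
print(dgm1)            # expected: an empty 1D persistence diagram
\end{verbatim}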
\end{document}
|
arXiv
|
Interval chromatic number of an ordered graph
In mathematics, the interval chromatic number X<(H) of an ordered graph H is the minimum number of intervals the (linearly ordered) vertex set of H can be partitioned into so that no two vertices belonging to the same interval are adjacent in H.[1]
Difference with chromatic number
Unlike the usual chromatic number, the interval chromatic number is easy to compute. Indeed, a simple greedy algorithm efficiently finds an optimal partition of the vertex set of H into X<(H) independent intervals. This is in sharp contrast with the fact that even approximating the usual chromatic number of a graph is an NP-hard task.
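A minimal sketch of such a greedy procedure (an illustration, not taken from the reference below): scan the vertices in their linear order, extend the current interval while the next vertex has no neighbour inside it, and start a new interval otherwise.

def interval_partition(n, edges):
    # vertices are 0, 1, ..., n-1 in their linear order;
    # edges is a set of pairs (u, v) of adjacent vertices
    adjacent = lambda u, v: (u, v) in edges or (v, u) in edges
    intervals, current = [], []
    for v in range(n):
        if any(adjacent(u, v) for u in current):
            intervals.append(current)   # v has a neighbour in the interval,
            current = [v]               # so start a new interval at v
        else:
            current.append(v)
    intervals.append(current)
    return intervals                    # the number of intervals is X<(H)

For example, n = 4 with the single edge (0, 2) yields [[0, 1], [2, 3]], so the interval chromatic number of that ordered graph is 2.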
Let K(H) be the chromatic number of an ordered graph H. Then for any ordered graph H, X<(H) ≥ K(H).
Note that a graph H and all graphs isomorphic to it have the same chromatic number, but their interval chromatic numbers may differ, since the interval chromatic number depends on the linear ordering of the vertex set.
References
1. János Pach, Gábor Tardos, "Forbidden Patterns and Unit Distances", pp. 1–9, 2005, ACM.
|
Wikipedia
|
Negri bodies are viral factories with properties of liquid organelles
Jovan Nikolic1 (ORCID: orcid.org/0000-0003-4119-7009),
Romain Le Bars1 (ORCID: orcid.org/0000-0002-6605-391X),
Zoé Lama1,
Nathalie Scrima1,
Cécile Lagaudrière-Gesbert1,
Yves Gaudin1 (ORCID: orcid.org/0000-0002-0122-2954) &
Danielle Blondel1
Nature Communications volume 8, Article number: 58 (2017)
Subjects: Organelles, Virus–host interactions
Replication of Mononegavirales occurs in viral factories which form inclusions in the host-cell cytoplasm. For rabies virus, those inclusions are called Negri bodies (NBs). We report that NBs have characteristics similar to those of liquid organelles: they are spherical, they fuse to form larger structures, and they disappear upon hypotonic shock. Their liquid phase is confirmed by FRAP experiments. Live-cell imaging indicates that viral nucleocapsids are ejected from NBs and transported along microtubules to form either new virions or secondary viral factories. Coexpression of rabies virus N and P proteins results in cytoplasmic inclusions recapitulating NB properties. This minimal system reveals that an intrinsically disordered domain and the dimerization domain of P are essential for the formation of Negri body-like structures. We suggest that the formation of liquid viral factories by phase separation is common among Mononegavirales and allows specific recruitment and concentration of viral proteins, but also escape from the cellular antiviral response.
Replication and assembly of many viruses occur in specialized intracellular compartments known as viral factories, viral inclusions or viroplasms1, 2. These neo-organelles formed during viral infection concentrate viral proteins, cellular factors and nucleic acids to build a platform facilitating viral replication. They might also prevent the activation of host innate immunity and restrain the access of viral machineries to cellular antiviral proteins. Such factories are widespread in the viral world and have been identified for a variety of non-related viruses.
The location and the nature of viral factories are very heterogeneous. They depend on the genome composition (DNA or RNA) and on the viral replication strategy. The first viral factories to be characterized were those formed by large DNA viruses such as the Poxviridae, the Iridoviridae and the Asfaviridae3,4,5,6. Those factories are devoid of membranes and located in close proximity to the microtubule organizing center. They recruit mitochondria, contain molecular chaperones such as HSP proteins and are surrounded by a vimentin cage. In the case of positive-strand RNA viruses, viral factories are associated with rearrangements of membranes from diverse organelles (mitochondria, ER, and so on) leading to the formation of double-membrane vesicles7,8,9. These vesicles seem to remain connected to the cytoplasm by channels which allow ribonucleotide import and product RNA export.
Several negative-strand RNA viruses also induce the formation of membrane-less cytoplasmic inclusions which, in the case of rhabdoviruses10, 11 and filoviruses12, have been demonstrated to harbor several viral replication stages. In the case of rabies virus (RABV), those inclusions are called Negri bodies (NBs) and can reach several microns in diameter11, 13, 14.
RABV (Mononegavirales order, Rhabdoviridae family, Lyssavirus genus) is a neurotropic virus, which remains a substantial health concern as it causes fatal encephalitis in humans and animals and still kills > 55,000 people worldwide every year mainly in Asia and Africa. It has a negative stranded RNA genome (about 12 kb) encoding five proteins. The genome is encapsidated by the nucleoprotein (N) to form a helical nucleocapsid in which each N protomer binds to nine nucleotides15. The nucleocapsid is associated with the RNA dependent RNA polymerase (L) and its cofactor the phosphoprotein (P) to form the ribonucleoprotein (RNP) which is enwrapped by a lipid bilayer derived from a host cell membrane during the budding process. The matrix protein M and the glycoprotein G are membrane-associated proteins. M protein is located beneath the viral membrane and bridges the condensed RNP and the lipid bilayer. G protein is an integral transmembrane protein that is involved in viral entry16. The virus enters the host cell through the endocytic pathway via a low-pH-induced membrane fusion process catalyzed by G. The RNP is then released into the cytoplasm and serves as a template for transcription and replication processes that are catalyzed by the L–P polymerase complex. During transcription, a positive-stranded leader RNA and five capped and polyadenylated mRNAs are synthesized. The replication process yields nucleocapsids containing full-length antigenome-sense RNA, which in turn serve as templates for the synthesis of genome-sense RNA. Replication strictly depends upon ongoing protein synthesis to provide the N protein necessary to encapsidate nascent antigenomes and genomes. Neo-synthesized genomes either serve as templates for secondary transcription or are condensed and assembled with M proteins to allow budding of neo-synthesized virions at a cellular membrane16.
We have previously demonstrated that viral transcription and replication take place within NBs11. NBs contain all the replication machinery (L, N and P)11 together with M17 and several cellular proteins including HSP7018 and the focal adhesion kinase (FAK)19. It has been recently shown that NBs are in close proximity to stress granules (SGs)20, which are membrane-less liquid cellular organelles consisting of mRNA and protein aggregates21, 22 that form rapidly in response to a wide range of environmental cellular stresses and viral infections23.
In this report, using a recombinant RABV expressing a fluorescent P protein, we have investigated the physical nature of NBs. Live imaging and FRAP analysis demonstrated that they have the properties of liquid organelles. We also characterized the role of the cytoskeleton in the dynamics of NBs and the transport of RNPs. This revealed that RNPs are ejected from NBs by a cytoskeleton-independent mechanism and are further transported along microtubules. Finally, we developed a minimal system which recapitulates NB properties and allowed us to identify the P domains required for NB formation.
Characterization of inclusions formed during RABV infection
RABV-infected BSR cells (MOI = 0.5) were fixed and permeabilized at different times post infection (p.i.), and the structures formed in the cytosol were analyzed by immunofluorescence with an anti-N antibody (Fig. 1a). At each time, the structures were counted and classified based on their size (i.e., the surface of their projection) (Figs. 1b–d). We distinguished small dots (surface < 0.26 µm2) (Fig. 1b), inclusions of intermediate size (0.26 µm2 < surface < 3.7 µm2) (Fig. 1c) and large inclusions (surface > 3.7 µm2) (Fig. 1d). At early time p.i. (Fig. 1a,c, 8 h), there is a limited number (up to 2 per cell) of spherical intracytoplasmic inclusions of intermediate size, which correspond to the first NBs formed during the infection11, 20. These structures grow with time (Fig. 1a,e) and sometimes lose their spherical aspect (Fig. 1a, 24 h). At later stages of infection (16 h and 24 h), large inclusions (1–3 per cell in most cases) are observed together with an increased number of intermediate inclusions and an explosion of the number of punctate structures (Fig. 1a–d).
Characterization of cytoplasmic inclusions in BSR cells infected by RABV. BSR cells were infected with CVS strain at a MOI of 0.5 and fixed at different times p.i. (8 h, 12 h, 16 h, 24 h). a Confocal analysis was performed after staining with a mouse monoclonal anti-N antibody followed by incubation with Alexa-488 donkey anti-mouse IgG. DAPI was used to stain the nuclei. Scale bars correspond to 15 µm. b–d Quantification of cytoplasmic structures labeled with anti-N antibody. At the indicated time p.i. the number of small dots (surface < 0.26 µm2) b, of intermediate inclusions (surface between 0.26 and 3.7 µm2) c, and of large inclusions (surface > 3.7 µm2) d per cell was quantified using the image toolbox of MatLab software as described in the experimental procedures. *p < 0.02; **p < 0.01 (n = 20, two tailed Mann Whitney U test). e Average surface of the largest inclusion in the cell at the indicated time p.i. Surfaces of inclusions were determined using the image toolbox of MatLab software as described in the experimental procedures. The mean is shown with error bars representing the SD. ***p < 10−4 (n = 20, two tailed Welch's t-test). f Confocal analysis revealing the basal localization of small dots and the median localization of inclusions in RABV infected cells at 24 h p.i. The analysis was performed after staining as in a. Scale bars correspond to 15 µm. g, h EM characterization of the ultrastructural aspects of BSR cells infected by RABV. Basal sections g reveal the presence of RNPs whereas median sections h reveal the presence of NBs displaying an electron dense granular structure (colored in blue in the bottom row) which lose their spherical shape when they associate with membranes at 24 h p.i
The cellular location of inclusions and punctate structures was analyzed by confocal microscopy. At 24 h p.i., punctate structures are mainly detected in the basal section, whereas inclusions (intermediate and big ones) are located in median sections of the cell (Fig. 1f).
This was confirmed by electron microscopy performed on fixed samples (Fig. 1g,h). Cytoplasmic inclusions are mostly observed in median sections (Fig. 1h). They are initially highly spherical and devoid of membranes (Fig. 1h, 8 h). At later stages (Fig. 1h, 24 h), they are often associated with membranes, most probably derived from the ER11, and their shape appears to be altered. At that time, basal sections of the cells reveal the presence of typical condensed RNPs (Fig. 1g). Therefore, the small punctate structures observed by immunofluorescence most probably correspond to condensed RNPs. The inclusions, either large or intermediate, will be referred to as NBs hereafter.
NBs are liquid structures
To gain insight into the dynamics of NBs inside the cytoplasm, live-cell imaging was performed. For this, BSR cells were infected with the previously described recombinant virus rCVSN2C-P-mCherry20, which expresses the P protein C-terminally fused to the mCherry fluorescent protein. This recombinant virus behaves like the wild type and, in particular, exhibits similar kinetics of NB formation (Supplementary Fig. 1).
Up to 16 h p.i., NBs appear to be highly spherical structures (Figs. 1a,h) as judged by the distribution of their axial ratio, which is < 1.2 in 70% of cases (Fig. 2a). This suggests that they consist of viscous liquids, similar to the liquid-like nature of P granules24, 25, SGs21, 22 and nuclear bodies26. Consistent with this view, we frequently observed that when two or more NBs contact one another, they readily fuse and round up into a single larger sphere (Fig. 2b, Supplementary Movies 1 and 2). Furthermore, we also sometimes observed spherical bubbles crossing the NBs (Fig. 2c, Supplementary Movie 3). The directionality of their movement suggests that they are vesicles being trafficked through the NBs. This reinforces the idea that NBs are made of a fluid phase, which can reversibly deform when encountering a physical barrier.
NBs are liquid organelles. a NBs are close to spherical. Distribution of axial ratios of NBs observed 12 h p.i. (20 cells and 100 NBs). The axial ratio (a/b) was determined by fitting the NB to an ellipse having the same second moments of area using the image toolbox of MatLab software (a, b: long and short axes of the ellipse). b NBs can fuse together. BSR cells infected by rCVSN2C-P-mCherry were imaged at the indicated time (lower left corner). The initial time corresponds to 15 h p.i (top row) and 18 h p.i. (bottom row). Images have been extracted from Supplementary Movies 1 and 2 and are shown at 10-sec intervals (top row) and 5-sec intervals (bottom row). Scale bars: 3 µm. c Spherical bubbles cross NBs. BSR cells infected by rCVSN2C-P-mCherry were imaged at the time indicated (lower left corner). The initial time corresponds to 16 h p.i. Images have been extracted from Supplementary Movie 3 and are shown at 5-s intervals. Scale bar: 3 µm. d NBs are sensitive to a hypotonic shock. BSR cells infected by rCVSN2C-P-mCherry were imaged. A hypotonic shock was applied at indicated t = 0 (corresponding to 18 h p.i.). Images are shown at 2-min intervals. Images have been extracted from Supplementary Movie 4. Scale bar: 15 µm. e–h Fluorescence recovery after photobleaching (FRAP) of P-mCherry in BSR cells at 37 °C. The diameter of the photobleached regions was 2.7 µm. e–g FRAP data were corrected for background, normalized to the minimum and maximum intensity. The mean is shown with error bars representing the SD. Experimental curves were fitted with a two-exponential model (in black). e Cytosolic P-mCherry expressed in BSR-T7/5 cells was photobleached 24 h after transfection of pTit-P-mCherry plasmid. Data were from 11 FRAP events (Supplementary Fig. 2). f Cytosolic P-mCherry expressed in BSR cells infected by rCVSN2CΔG-P-mCherry was photobleached 16 h p.i. Data were from 13 FRAP events (Supplementary Fig. 3). g P-mCherry localized in NBs in BSR cells infected by rCVSN2CΔG-P-mCherry was photobleached 16 h p.i. Data were from 12 FRAP events (Supplementary Fig. 4). h Fluorescence recovery profile along a diameter of a photobleached NB as in g
It has been reported that liquid organelles respond rapidly to cellular osmotic shock27. To test whether NBs also behave in this way, BSR cells that had been infected with rCVSN2C-P-mCherry for 16 h were incubated with DMEM medium diluted 5 times in water. This hypotonic shock resulted in a fast and complete disappearance of NBs followed by the reformation of inclusions of intermediate size within 10–15 min (Fig. 2d, Supplementary Movie 4).
To definitively demonstrate the fluid nature of NBs, we performed fluorescence recovery after photobleaching (FRAP) experiments. For safety reasons, we used a G-gene-deleted recombinant RABV (rCVSN2CΔG-P-mCherry), which was pseudotyped with the RABV G glycoprotein. This recombinant RABV also induces the formation of NBs, which incidentally demonstrates that G synthesis is not required for NB formation.
FRAP experiments were first performed on BSR cells transfected with a plasmid expressing P-mCherry (Fig. 2e and Supplementary Fig. 2). P-mCherry expressed alone was diffuse in the cytosol (Supplementary Fig. 2) and fluorescence recovery was fast although biphasic (Supplementary Table 1). The half-time of the first phase for a photobleached region of 2.7 µm diameter was about 0.4 s, corresponding to an approximate diffusion coefficient of 5 × 10−12 m2 s−1 for P-mCherry in the cytosol (Supplementary Table 1), a value consistent with that expected for a protein of such a molecular weight. Full fluorescence recovery was achieved after ~30 s. Similar data were obtained in cells infected by rCVSN2CΔG-P-mCherry, when cytosolic P (i.e., outside the NBs) was photobleached (Fig. 2f and Supplementary Fig. 3).
When P located inside NBs was photobleached (Fig. 2g and Supplementary Fig. 4), the recovery of the fluorescence signal was again biphasic but much slower (Supplementary Table 1). The half-time of the first phase for a photobleached region of 2.7 µm diameter is ~5.2 s (Fig. 2g), corresponding to a diffusion coefficient of 3.9 × 10−13 m2 s−1 (Supplementary Table 1). Such a self-diffusion rate is consistent with those of other membrane-less organelles, such as nuclear speckles and nucleoli21, 28. After 2 min, the fluorescence recovery is not complete, reaching only 78 ± 8% (± SD, n = 12). The recovery profile along a diameter revealed that fluorescence returned first to the periphery a few seconds after photobleaching and then displayed a homogeneous distribution within the structure after ~1 min (Fig. 2h). Therefore, the FRAP data confirm that NBs have liquid properties and that the P protein can shuttle between the cytosol and the interior of the NBs.
We have previously shown that SGs are formed in the cytoplasm of RABV-infected cells20. As SGs have also been demonstrated to be liquid droplets21, 22, we investigated whether SGs and NBs can fuse. Cells were transfected with a plasmid encoding G3BP-GFP, which is a marker of SGs. Then, 1 h post transfection, they were infected with rCVSN2C-P-mCherry. As previously described, we observed SGs in close proximity to NBs (Fig. 3, Supplementary Movie 5). However, we never observed mixing of the contents of the two structures: the P protein remains in NBs whereas G3BP remains in SGs. Furthermore, in some circumstances, we identified small SGs located inside NBs, confirming the non-miscibility of the two liquid phases (Fig. 3).
NBs and SGs are non-miscible liquid organelles. U373-MG cells were transiently transfected with pG3BP-eGFP (to visualize SGs, top row). 1 h post transfection, they were infected with the recombinant virus rCVSN2C-P-mCherry (medium row). Live-cell time-lapse experiments were performed at 16 h p.i. G3BP-GFP signals (green) and P-mCherry signals (red) are merged in the bottom row. The time post-infection is displayed in the upper left corner of each panel and the scale bars correspond to 10 μm. Images have been extracted from Supplementary Movie 5 and are shown at 5-min intervals. Note also the mobile, punctate G3BP-GFP-containing structures inside NBs
Role of the cytoskeleton in NB fate and RNP transport
To investigate the role of the cytoskeleton in NB morphogenesis and dynamics, we used several drugs which interfere with the organization of the cytoskeleton. Treatment of RABV infected cells with Nocodazole (NCZ), a drug which depolymerizes the microtubules, results in the disappearance of NBs of intermediate size (Fig. 4a,c) and a significant decrease in the number of small dots corresponding to condensed RNPs (Fig. 4a,b). In general, in the presence of NCZ, a single-large NB is observed (Fig. 4a,d) which is bigger than those observed in absence of the drug. This is evidenced by the significant increase of the average surface of the largest NB present in an infected cell (Fig. 4e). Taxol, which stabilizes the microtubule network, has no effect on the structures.
Effect of cytoskeleton-disrupting drugs on formation and evolution of NBs. Nocodazole (NCZ, 2 µM), Taxol (1.25 nM) and Cytochalasin D (Cyto D, 2.5 µM) were added 1 h before and kept all along infection. BSR cells were infected with CVS strain at a MOI of 0.5 and fixed at 16 h p.i. NT: non treated cells. a Confocal analysis was performed after staining with a mouse monoclonal anti-N antibody followed by incubation with Alexa-488 donkey anti-mouse IgG. DAPI was used to stain the nuclei. Scale bars correspond to 15 µm. b–d Quantification of cytoplasmic structures labeled with anti-N antibody in treated and non-treated cells. The number of small dots (surface < 0.26 µm2) b, of intermediate inclusions (surface between 0.26 and 3.7 µm2) c, and of large inclusions (surface >3.7 µm2) d per cell was quantified using the image toolbox of MatLab software as described in the experimental procedures. **p < 0.01, ns: not significant (n = 20, two tailed Mann Whitney U test). e Average surface of the largest inclusion in the cell 16 h p.i. in non-treated and treated cells. Areas of inclusions were determined using the image toolbox of MatLab software as described in the experimental procedures. The mean is shown with error bars representing the SD. ns: not significant; **p < 2.10−3; ***p < 10−3 (n = 20, two tailed Welch's t-test)
Cell treatment by Cytochalasin D (Cyto D), which inhibits actin polymerization, results in an increased fragmentation of NBs, which is apparent when considering the average surface of the largest NB present in an infected cell (Fig. 4e).
Live cell imaging was then performed in RABV infected cells. We observed ejections of punctate structures out of NBs (Fig. 5a, Supplementary Movies 6 and 7). These structures, which probably correspond to condensed RNPs, are rapidly transported further away across the cell (Supplementary Movie 8). Ejections are also observed when cells are treated by Nocodazole (Fig. 5b, Supplementary Movie 9). However, the ejected structures are not transported and remain in the vicinity of the NBs (Fig. 5b, Supplementary Movie 9).
RNPs are ejected from NBs and transported along the microtubule network. Live cell imaging was performed on BSR cells infected by rCVSN2C-P-mCherry at a MOI of 0.5 at 16 h p.i. a Ejection of RNPs from NBs. The time is indicated in the upper right corner. Images have been extracted from Supplementary Movie 6 and are shown at 5-sec intervals. Ejection events are observed on a single image and indicated by arrowheads. The resulting RNPs are indicated by arrows. Scale bar corresponds to 3 µm. b Impact of cytoskeleton-disrupting drugs on RNPs transport in the cytosol. Nocodazole (NCZ, 2 µM), Taxol (1.25 nM) and Cytochalasin D (Cyto D, 2.5 µM) were added 1 h before and kept all along infection (NT: non-treated cells). 120 frames (one frame per 1 s, reflecting 120 s) of time-lapse microscopy (such as Supplementary Movie 7) are displayed as maximal intensity projection in order to visualize RNP trajectories which are indicated by arrows in the magnification shown in the lower row. In the NCZ-treated cell (Supplementary Movie 8), an ejection event is indicated by an arrowhead showing that RNP ejection from NBs occurs in absence of an intact microtubule network. Scale bars correspond to 15 µm. c Velocity of RNPs in the cytosol in non-treated and treated cells. The speed and the trajectories of RNPs were determined as described in the experimental procedures. Only the RNPs that were tracked on four consecutives images were taken into account. The mean is shown with error bars representing the SD. d RNPs are transported along microtubules. BSR cells were co-infected with rCVS N2C-P-mCherry and a modified baculovirus encoding human tubulin-GFP (Cell-light Tubulin-GFP). Images were deconvoluated using the Huygens Imaging software (Supplementary Fig. 5). Scale bar corresponds to 15 µm. Images have been extracted from Supplementary Movie 9 and are shown at 2.5-s intervals (bottom row)
We further characterized the transport of RNPs inside the cytoplasm (Fig. 5b). The histograms in Fig. 5c show that their velocity is the same in non-treated cells and in cells treated with Taxol or Cytochalasin D. However, in the presence of Nocodazole, RNPs remain immobile in the vicinity of the NBs (Fig. 5b,c). Together with the fact that the measured velocities are in agreement with those of other cargoes whose transport is microtubule dependent, this indicates that RNP transport requires the integrity of the microtubule network.
Indeed, when live imaging was performed on cells that had been transduced with a baculovirus expressing GFP-fused tubulin before infection by rCVSN2C-P-mCherry, we directly observed the transport of RNPs along the microtubules (Fig. 5d, Supplementary Movie 10).
A minimal system recapitulating NB properties
It has been previously demonstrated that co-expression of N and P after cell transfection also leads to the formation of cytoplasmic inclusions29. Indeed, in BSR cells constitutively expressing the T7 RNA polymerase (BSR-T7/5) and co-transfected by plasmids pTit-P and pTit-N, cytoplasmic spherical inclusions are observed (Fig. 6a).
Co-expression of N and P leads to the formation of inclusion bodies recapitulating NB properties. a BSR-T7/5 cells were co-transfected for 24 h with plasmids pTit-P and pTit-N (in equimolar concentration). N was revealed with a mouse monoclonal anti-N antibody followed by incubation with Alexa-488 donkey anti-mouse IgG and P was revealed with a rabbit polyclonal anti-P antibody followed by incubation with Alexa-568 donkey anti-rabbit IgG. DAPI was used to stain the nuclei. Scale bars correspond to 15 µm. b Fluorescence recovery after photobleaching (FRAP) of P-mCherry localized in inclusion bodies in BSR-T7/5 co-expressing P-mCherry and N. The diameter of the photobleached regions was 2.7 µm. FRAP data were corrected for background, normalized to the minimum and maximum intensity, and the mean is shown with error bars representing the SD. Experimental curves were fitted with a two-exponential model (in black). Data were from 21 FRAP events (Supplementary Fig. 6). c Domain organization of RABV P polypeptide chain. P contains an N-terminal domain which binds to N0 (PNTD:P-N0), two intrinsically disordered domains (IDD1 and IDD2), a dimerization domain (DD) and a C-terminal domain which binds to RNA-associated N protein (PCTD:P-NARN). Phosphorylation sites in position 162, 210 and 271 are indicated. d, e Identification of the P domains involved in inclusion bodies formation. BSR-T7/5 cells were co-transfected with plasmids pTit-N and the indicated construction of pTit-P. N was revealed with a mouse monoclonal anti-N antibody followed by incubation with Alexa-488 donkey anti-mouse IgG and P was revealed with a rabbit polyclonal anti-P antibody followed by incubation with Alexa-568 donkey anti-rabbit IgG. DAPI was used to stain the nuclei. Scale bars correspond to 15 µm
The dependence of inclusion formation on the stoichiometry of the transfected plasmids was investigated. Spherical inclusions were observed with ratios of pTit-N and pTit-P plasmids ranging from 3:1 to 1:3. At limiting concentrations of one of the plasmids (ratios 9:1 or 1:9), no inclusions were observed even when both proteins were detected in the cell (Supplementary Fig. 6).
We investigated the physical properties of the inclusions formed in this minimal system. For this, we co-transfected BSR-T7/5 cells with pTit-N and pTit-P-mCherry. We then performed FRAP experiments on P-mCherry located in the cytoplasmic inclusions that were formed. Recovery profiles (Fig. 6b and Supplementary Fig. 7), although less homogeneous, are nevertheless similar to those observed for P located in NBs in infected cells (Fig. 2g and Supplementary Table 1). This confirmed that the N-P inclusions formed in this minimal system have the same liquid characteristics as the NBs.
N-P inclusions also concentrate several cellular partners of P previously shown to be located in NBs, such as HSP70 and FAK (Supplementary Fig. 8). However, unlike NBs, N-P inclusions do not eject material into the cytosol (Supplementary Movie 11, same image acquisition as Supplementary Movie 7). A possible role of the RABV M protein in pinching off and ejection of RNPs was investigated. Co-transfection of pTit-N and pTit-P-mCherry together with pTit-M-GFP (which allows expression of M-GFP) did not result in material ejection from N-P inclusions (Supplementary Movie 12, same image acquisition as Supplementary Movies 7 and 11).
We then took advantage of this minimal system to identify P domains that are essential for inclusion formation. P is a modular protein (Fig. 6c) containing an N-terminal domain (PNTD) which binds N0 (the soluble form of N protein devoid of RNA), a C-terminal domain (PCTD) which binds to the N associated with RNA, and two central intrinsically disordered domains (IDD1 and IDD2) flanking a dimerization domain (DD)30. Therefore, pTit plasmids allowing the expression of P deletion mutants were co-transfected with pTit-N and the presence of N-P inclusions was investigated. This revealed that the domains DD, IDD2 and PCTD are required for the formation of NB-like structures (Fig. 6d).
Domains IDD2 and PCTD are phosphorylated on residues S162, S210 and S27131 (Fig. 6c). A possible role of phosphorylation in the phase transition was investigated. Residues S162, S210 and S271 were mutated either into alanine residues (to abolish phosphorylation) or into aspartate residues (to mimic the phosphorylated state of the serine). None of those mutations affected the ability of P to form NB-like structures (Supplementary Fig. 9).
Finally, we delineated more precisely the part of the IDD2 domain which is required for phase separation. Deletion of residues 151–181 did not affect the ability of P to form NB-like structures (Fig. 6e). Therefore, only the amino-terminal part of IDD2 (residues 132–150) is required for this process. This was confirmed by the deletion of residues 139–151, which abolished the formation of spherical inclusions (Fig. 6e).
In this study, we provide evidence that NBs are liquid droplets formed by phase separation. First, they form spherical structures (Figs. 1a,h and 2b) with a distribution of their axial ratios (Fig. 2a) which is very similar to that observed for other liquid organelles32. Second, they fuse together to form larger spherical structures (Fig. 2b). Third, they are crossed by spherical bubbles which are most probably vesicles being trafficked through their interior, showing that NBs can reversibly deform when encountering a physical barrier (Fig. 2c). Fourth, osmotic shock, induced by a rapid change from isotonic to hypotonic conditions, causes the rapid dissolution of NBs followed by the reformation of smaller spherical inclusions (Fig. 2d), a behavior which is reminiscent of that of the liquid cellular Ddx4-organelles27. Finally, the internal order within the NBs was assessed using FRAP measurements which definitively demonstrate their liquid nature (Fig. 2e–h).
Besides those physical properties, NBs share several other features with cellular liquid organelles21, 25, 27, 33, 34. In particular, they are composed of viral RNA, an RNA-binding protein (N), and a protein containing intrinsically disordered domains (P). Indeed, P and N expressed alone were able to form structures which recapitulate the properties of NBs (Fig. 6a). It is probable that in such N-P inclusions, N is associated with cellular RNAs and forms N-RNA rings and short RNP-like structures35. This minimal system allowed us to investigate the domains of P involved in this process. The P dimerization domain, its N-RNA binding domain and the amino-terminal part of its second intrinsically disordered domain (IDD2) were absolutely required for the formation of NB-like structures (Fig. 6d). This could suggest that P dimers bridge RNPs to ensure the cohesion of the liquid phase. However, the P dimer is stable36 and the affinity of P dimers for N-RNA rings (which mimic the structure of the viral RNPs) is quite high (Kd = 160 ± 20 nM)37. Such strong interactions are not compatible with a liquid behavior. Furthermore, in vitro experiments have indicated that P dimers are associated with N proteins of the same nucleocapsid37. Therefore, there must be other weak interactions, either between P dimers or between nucleocapsids, which remain to be characterized. We suggest that those weak interactions are mediated by the amino-terminal part of IDD2, as such intrinsically disordered domains have been involved in the formation of fuzzy complexes38, 39.
The sequence of the amino-terminal part of IDD2 (residues 132–150, Supplementary Fig. 10) is strongly biased toward polar and charged residues, which is a common feature of IDDs. It is also enriched in proline residues which, however, are not conserved in the lyssavirus genus. In fact, the IDD2 region (made of 51 residues) is not conserved except for the two residues Q and T at positions 147 and 148. This has to be compared with a 32% conservation of the P amino acid sequence in the genus (Supplementary Fig. 10). Therefore, the ability of IDD2 to induce phase separation does not rely on its amino acid sequence but rather on some global physicochemical properties which remain to be identified.
Depending on their physicochemical properties, proteins partition preferentially either in the cytosolic phase or the NB liquid phase. Therefore, phase separation is an efficient process to enrich the viral factories in factors which are required for viral transcription or replication. FRAP experiments revealed that P, although more concentrated in NBs, is able to shuttle between the cytosol and the NBs. This may help to recruit cellular partners in the viral factories. Indeed, several identified partners of P are recruited inside NBs. This is the case for the focal adhesion kinase FAK19 and heat shock protein HSP7018. All these proteins have been demonstrated to have proviral activities18, 19 and are also associated with NB-like structures in the minimal system (Supplementary Fig. 8A).
On the other hand, phase separation may also exclude proteins with antiviral properties. In particular, we show that SGs, which are liquid organelles containing intracellular pattern recognition receptors acting as sensors of RNA virus replication20, 40, 41, do not fuse with NBs (Fig. 3). This indicates that SGs and NBs form non-miscible liquid phases. Immiscibility of cellular liquid phases has already been observed and underlies the formation of nucleolar subcompartments42. It is worth noting that rhabdoviruses may have evolved different ways of dealing with SGs: VSV infection also induces the formation of SG-like structures which, however, colocalize with viral replication proteins and RNA43.
In the presence of Nocodazole, which depolymerizes microtubules, the transport of RNPs inside the cytosol is inhibited (Fig. 5b,c). In agreement with this result, live cell imaging revealed a tight association between mobile RNPs and microtubules (Fig. 5d). This explains the previously observed inhibitory effect of Nocodazole on viral production11. On the other hand, the ejection of RNPs from NBs is not affected by Nocodazole and, therefore, is not microtubule-dependent. An attractive hypothesis is that a conformational change of RNPs (e.g., their compaction) modifies their physicochemical properties and drastically decreases their solubility in the NB liquid phase, which results in their ejection. Remarkably, no material ejection from NB-like structures is observed in the minimal system. Therefore, the trigger for ejection is only present upon RABV infection.
As depicted in the model presented in Fig. 7, ejected RNPs are transported in the cytosol far away from the NBs. Then, they can either be incorporated into new virions or give rise to new viral factories. This explains the burst of NBs of intermediate size after 12 h of infection. In presence of Nocodazole (Fig. 7), ejected RNPs remain in the vicinity of the NB. They form new viral factories which fuse with the NB. This explains why, in the presence of the drug, a large single NB is generally observed (Fig. 4).
A model for the dynamics of RNPs and NBs in RABV infected cells treated (+NCZ) or not (−NCZ) by Nocodazole. 1. and 2. The initial NB is formed around an incoming RNP. 3. RNPs are ejected from NB by a process which is microtubule independent and are transported away from the initial NB along the microtubule network (−NCZ) or remain in the vicinity of the initial NB when the cells are treated by Nocodazole (+NCZ). 4. In untreated cells (–NCZ), the newly formed RNPs can give rise either to new virions upon budding at the cell membrane or to new viral factories which form NBs of intermediate size. In Nocodazole-treated cells (+NCZ), the newly formed viral factories are located in the vicinity and rapidly fuse with the initial NB which then becomes much larger
Cytochalasin D, which depolymerizes actin filaments, has little effect on the number of inclusions and free RNPs formed all along the viral cycle. However, our data indicate a decrease of the size of the largest NBs (Fig. 4). This suggests that the actin filament network limits the fragmentation of NBs.
An overview of the literature suggests that the liquid nature of viral factories might be generalized to several other negative RNA viruses (either segmented or not). This is exemplified by FRAP experiments performed on fluorescent Borna disease virus phosphoprotein, which is also found in spherical inclusions in the nucleus of infected cells44. This is also suggested by data obtained (i) on filoviruses, which form spherical perinuclear inclusions that are viral replication centers12, 45, (ii) on several paramyxoviruses and pneumoviruses, of which the N and P proteins form spherical inclusions during infection46,47,48, and (iii) on bunyaviruses, of which genome segments are found 'aggregated' in spherical structures49. Therefore, it appears that the formation of such liquid viral factories constitutes a signature of negative RNA viral infection. It is thus highly probable that the cell has developed mechanisms sensing the presence of such structures and that viruses have in turn developed strategies to increase the furtiveness of their factories. Finally, the characterization of the physicochemical nature of those liquid viral factories will pave the way toward innovative antiviral strategies.
N2A cells (mouse neuroblastoma, ATCC reference: CCL 131) and U373-MG cells (human glioblastoma astrocytoma, ATCC reference: HTB17) were purchased from the ATCC (http://www.lgcstandards-atcc.org). BSR cells, a clone from BHK 21 (Baby Hamster Kidney), were obtained from Dr Anne Flamand (from the former Laboratoire de Génétique des Virus, Gif, France). All these cells were grown in Dulbecco's modified Eagle medium (DMEM) supplemented with 10% fetal calf serum (FCS).
BSR-T7/5 cells, a clone of BHK 21 constitutively expressing the T7 RNA polymerase50, were grown in DMEM supplemented with 10% FCS, supplemented with 2% Geneticin (Gibco-Life Technologies).
The challenge virus standard (CVS, French CVS 1151) strain of rabies virus was grown in BSR cells. rCVSN2C-P-mCherry virus encoding a P protein C-terminally fused to the fluorescent protein mCherry has been previously described20 and was grown in N2A cells.
Antibodies and drugs
The rabbit polyclonal anti-P antibody was previously described11. The mouse monoclonal anti-N antibody (81C4) was produced in a mouse immunized with purified nucleocapsids. Both were used at a 1/1000 dilution. Anti-HSP70 MAb (SPA-810) (1/200 dilution) was obtained from Stressgen. Nocodazole (M1404), Taxol (Paclitaxel T1402) and Cytochalasin D (30385) were purchased from Sigma.
Plasmids
The plasmid encoding G3BP-EGFP52 was kindly provided by Dr R Lloyd (Department of Molecular Virology and Microbiology, Baylor College of Medicine, Houston, TX 77584, USA). The plasmid pFAK-GFP encoding FAK in fusion with GFP has been described previously19.
pTit-P-mCherry plasmid encoding the P protein of CVS C-terminally fused to the mCherry fluorescent protein has been constructed using Gibson assembly kit (New England Biolabs). The P gene fused to the mCherry has been PCR amplified from the full-length genomic plasmid prN2C-P-mCherry20.
Plasmids pTit-PΔ, encoding truncated forms of the P protein, were constructed using the Gibson assembly kit: PCR products encoding fragments of the P gene, with overlapping sequences, were assembled with the kit. The plasmids pTit-P S162A, pTit-P S162D, pTit-P S210A, pTit-P S210D, pTit-P S271A and pTit-P S271D were constructed using the QuikChange Site-Directed Mutagenesis Kit (Agilent Technologies).
Plasmids pTit-M, encoding the M protein of CVS and pTit-M-GFP, encoding M fused to the GFP fluorescent protein, have been constructed using Gibson assembly kit. In the latter construct, GFP was inserted between the N-terminal disordered domain and the globular C-terminal part of the M sequence (after residue 26).
In pTit plasmids53, the gene of interest is under the control of the T7 polymerase promoter and the corresponding transcripts contain an internal ribosomal entry site (IRES) located upstream of the open reading frame.
Construction of Recombinant Virus rCVSN2CΔG-P-mCherry
The full-length recombinant rCVSN2C-P-mCherry infectious clone was described previously20. The G coding sequence was removed from the genomic plasmid. The original full-length genomic plasmid was digested with SpeI and MluI restriction enzymes. Two overlapping fragments were amplified by PCR: the first going from the SpeI site in the P gene to the beginning of the G coding sequence, and the second going from the end of the G coding sequence to the MluI site in the L gene. The PCR products and the digested plasmid were assembled using the Gibson Assembly kit (New England Biolabs) to obtain the resulting plasmid, prN2C-P-mCherry-ΔG.
Recombinant virus (rCVSN2CΔG-P-mCherry) was recovered as described previously, with modifications19, 54. Briefly, N2A cells (10^6 cells) were transfected using Lipofectamine 2000 (Invitrogen) with 0.85 µg of full-length prN2C-P-mCherry-ΔG, in addition to 0.4 µg pTit-N, 0.2 µg pTit-P and 0.2 µg pTit-L53, which encode respectively the N, P and L proteins of rabies virus strain SAD-L16. These plasmids were cotransfected with 0.25 µg of a plasmid encoding the T7 RNA polymerase and the plasmid pCAGGS-G55 encoding the G protein of rabies virus strain Pasteur Virus (PV). Six days post-transfection, the supernatant was passaged on fresh N2A cells transfected with pCAGGS-G, and infectious recombinant viruses were detected 3 days later by the fluorescence of the P-mCherry protein.
Treatments of cells with drugs
Cells were kept in Dulbecco modified Eagle medium containing 2 μM Nocodazole, 1.25 nM Taxol, 2.5 μM Cytochalasin D for 1 h before virus inoculation and during infection.
Immunofluorescence staining and confocal microscopy
Cells were fixed for 15 min with 4% paraformaldehyde (PFA) and permeabilized for 5 min with 0.1% TX-100 in PBS. Cells were incubated with the indicated primary antibodies for 1 h at RT, washed and incubated for 1 h with Alexa fluor conjugated secondary antibodies (Thermo Fisher Scientific). Following washing, cells were mounted with Vectashield (Vector labs) containing DAPI. Images were captured using a Leica SP8 confocal microscope (63× oil-immersion objective).
Quantification of rabies induced structures
BSR cells were infected with CVS virus at an MOI of 0.5. Cells were fixed at different times post-infection and immuno-stained with a mouse monoclonal anti-N antibody (81C4). For each cell, four planes were acquired by confocal microscopy and a maximum-intensity Z-projection was made using ImageJ. Quantifications were performed using the image toolbox of MatLab software. Briefly, the grayscale image was converted to a binary image. The connected components (CC) of the output black-and-white image were then identified. For each CC, its area (i.e., the number of pixels) and its eccentricity e (i.e., that of the ellipse that has the same second moments as the CC) were calculated using MatLab software. This allows the classification of the fluorescent structures formed in the cytoplasm based on their size, and the determination of the area of the largest NB in the cell. This also allows the calculation of the axial ratio a/b (where a and b are the long and short axes of the ellipse) of each inclusion using the formula a/b = 1/(1 − e²)^(1/2).
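The same steps can be reproduced outside MatLab; the following is a minimal Python sketch assuming scikit-image is available (the Otsu threshold and the pixel-to-µm² conversion factor are assumptions; the 0.26 and 3.7 µm² size cut-offs are those given above):

import numpy as np
from skimage import filters, measure

def classify_inclusions(projection, um2_per_pixel):
    """Label anti-N structures in a max-intensity projection and bin them by projected area."""
    binary = projection > filters.threshold_otsu(projection)   # grayscale -> binary image
    labels = measure.label(binary)                              # connected components
    counts = {"small": 0, "intermediate": 0, "large": 0}
    axial_ratios = []
    for region in measure.regionprops(labels):
        area_um2 = region.area * um2_per_pixel                  # area in µm²
        e = region.eccentricity                                 # ellipse with same second moments
        axial_ratios.append(1.0 / np.sqrt(1.0 - e**2))          # a/b = 1/(1 - e²)^(1/2)
        if area_um2 < 0.26:
            counts["small"] += 1
        elif area_um2 <= 3.7:
            counts["intermediate"] += 1
        else:
            counts["large"] += 1
    return counts, axial_ratios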
Live cell microscopy
For live-cell imaging, BSR cells were seeded onto 35-mm μdishes (Ibidi) 24 h before infection. Cells were infected with CVS-N2C-PmCherry rabies virus in DMEM fluorobrite medium (Invitrogen) supplemented with 5% FCS. Live-cell time-lapse experiments were recorded with a Zeiss AxioObserver epifluorescence microscope (63× oil-immersion objective). Cells were maintained at 37 °C and 5% CO2 during imaging.
For live imaging of stress granules, U373-MG cells were transfected using Lipofectamine 2000 (Invitrogen) with a plasmid encoding a G3BP-GFP fusion protein prior to cell infection as previously described20.
Quantification of RNPs velocity
The speed and the trajectories of RNPs were determined manually using the "manual tracking" plugin of the ImageJ software. Only RNPs that were tracked over four consecutive images were taken into account. The distance between two consecutive positions of a given RNP was measured, allowing the calculation of its instantaneous velocity. Instantaneous velocities along an RNP trajectory were averaged to give the RNP velocity.
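As a toy illustration of this calculation (the coordinates and the 1-s frame interval below are invented for the example; the actual tracking was done manually in ImageJ):

import numpy as np

positions = np.array([[0.0, 0.0], [0.8, 0.2], [1.5, 0.6], [2.3, 0.9]])  # µm, four consecutive frames
frame_interval = 1.0                                                    # s between frames
steps = np.linalg.norm(np.diff(positions, axis=0), axis=1)              # distance between consecutive positions
instantaneous_velocities = steps / frame_interval                       # µm/s
print(f"mean RNP velocity: {instantaneous_velocities.mean():.2f} um/s")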
Labeling of tubulin in live cells
Cell-light Tubulin-GFP (Thermo Fisher Scientific) was used to label tubulin with green fluorescent protein (GFP) in live infected cells. Cell-light Tubulin-GFP is a modified baculovirus that encodes the human tubulin gene fused to GFP. BSR cells were co-infected with CVS-PmCherry and Cell-light Tubulin-GFP. Live-cell time-lapse experiments were recorded with a Zeiss AxioObserver epifluorescence microscope (63× oil-immersion objective). Images were deconvolved using the Huygens Imaging software (Scientific Volume Imaging). A blind deconvolution algorithm was used.
Infected BSR cells were fixed for 1 h at room temperature in 0.1 M cacodylate buffer (pH 7.2) containing 1% PFA, 2.5% glutaraldehyde and 1% tannic acid. Cells were then post-fixed for 1 h at 4 °C in 0.1 M cacodylate buffer (pH 7.2) containing 1% osmium and 0.8% potassium ferricyanide, and stained for 1 h at 4 °C with 2% uranyl acetate in water. Cells were then dehydrated in increasing concentrations of acetone and embedded in Epon with 2,4,6-tris(dimethylaminomethyl)phenol. Polymerization was carried out for 48 h at 60 °C. Ultrathin sections of Epon-embedded material were collected on copper palladium grids (200 mesh). Throughout the process, cells were kept as a monolayer. The cutting plane is parallel to the cell culture support, so the first sections correspond to the basal plane of the cells. These sections were stained with lead citrate and uranyl acetate. Sections were examined with a JEOL 1400 transmission electron microscope operated at 80 kV.
FRAP experiments were performed both on BSR cells infected with the recombinant virus rCVSN2CΔG-P-mCherry at 16 h p.i. and on BSR-T7/5 cells co-transfected with pTit-N and pTit-P-mCherry at 24 h post-transfection. Data acquisitions were performed on an inverted Nikon Ti Eclipse E microscope coupled with a spinning disk (Yokogawa CSU-X1-A1) and a cage incubator to control both temperature (37 °C) and CO2 concentration (5%). After excitation with a 561 nm laser (Cobolt Jive, 150 mW), fluorescence from mCherry was detected with a 100× oil-immersion objective (Apochromat NA 1.49), a bandpass filter (607/36, Semrock) and an sCMOS camera (Hamamatsu Orca-Flash4.0 LT). All the FRAP experiments were performed under identical conditions using the iLas FRAP module (Roper Scientific): 5 s prebleach, 40 ms bleach, 120 s postbleach at a frame rate of 1 image every 500 ms. Bleaching was performed in a circular region (diameter 2.71 µm) located at the center of the field of view, with the 561 nm laser at 100%.
For the FRAP data analysis, the mean intensity of every bleached region was measured and the recovery signal was normalized to the average of the prebleach signal and also corrected for the bleaching that occurs during postbleach imaging56. For this double normalization, the background intensity was estimated by measuring a region outside the cell. To measure the bleaching of the sample during postbleach imaging, the whole-cell fluorescence intensity was used for the FRAP experiments in the cytosol, while a mean bleaching curve was used for the FRAP experiments on the NBs. To generate this mean bleaching curve, we performed experiments under the same conditions as for FRAP but without the bleaching event; the mean bleach profile of 18 NBs gave a more robust correction than whole-cell measurements, owing to the very small number of NBs per cell and the strong impact of Z-axis drift of any surrounding NB under these conditions.
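A minimal sketch of this double normalization, with assumed array names (roi, reference and background stand for the bleached-region trace, the acquisition-bleaching reference trace and the background estimate; n_prebleach is the number of prebleach frames):

import numpy as np

def double_normalize(roi, reference, background, n_prebleach):
    """Background-subtract, normalize to prebleach, and correct for acquisition bleaching."""
    roi = np.asarray(roi, dtype=float) - background
    reference = np.asarray(reference, dtype=float) - background
    bleach_correction = reference / reference[:n_prebleach].mean()   # whole-cell or mean-NB curve
    normalized = (roi / roi[:n_prebleach].mean()) / bleach_correction
    return normalized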
After this normalization, individual FRAP curves were averaged to obtain a mean curve. As a single exponential function did not give a satisfactory fit, the mean curve was fitted by a double exponential function:
$$y(t) = y_0 + A_{\mathrm{fast}}\left(1 - e^{-k_{\mathrm{fast}}\,t}\right) + A_{\mathrm{slow}}\left(1 - e^{-k_{\mathrm{slow}}\,t}\right)$$
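Such a fit can be reproduced, for example, with SciPy; in this sketch t and y stand for the time points and the normalized mean recovery curve (assumed names, not the authors' code):

import numpy as np
from scipy.optimize import curve_fit

def double_exponential(t, y0, a_fast, k_fast, a_slow, k_slow):
    # two-component recovery model used for the mean FRAP curve
    return y0 + a_fast * (1 - np.exp(-k_fast * t)) + a_slow * (1 - np.exp(-k_slow * t))

# illustrative starting values for offset, amplitudes and rate constants
p0 = [0.0, 0.5, 1.0, 0.3, 0.05]
# popt, _ = curve_fit(double_exponential, t, y, p0=p0)
# k_fast = popt[2]   # gives t_1/2 = ln2 / k_fast, used in the diffusion formula below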
The diffusion coefficient D associated with the fast phase was calculated using the following formula27:
$$D = \frac{1}{t_{1/2}}\left(\frac{x}{2\,\mathrm{Erfc}^{-1}(0.5)}\right)^{2}$$
where Erfc is the complementary error function, $\mathrm{Erfc}^{-1}(0.5) \approx 0.4769$, $x$ is the radius of the beam and $t_{1/2} = \ln 2 / k_{\mathrm{fast}}$.
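As a numerical check of this formula, the half-times quoted in the Results (~0.4 s for cytosolic P-mCherry and ~5.2 s inside NBs) together with a bleach-spot radius of 1.35 µm (half of the 2.7 µm diameter) reproduce the reported diffusion coefficients:

from scipy.special import erfcinv

x = 1.35e-6                               # bleach-spot radius in metres
for t_half in (0.4, 5.2):                 # half-times of the fast phase, in seconds
    D = (1.0 / t_half) * (x / (2.0 * erfcinv(0.5))) ** 2
    print(f"t1/2 = {t_half} s  ->  D = {D:.1e} m^2/s")
# prints ~5.0e-12 and ~3.9e-13 m^2/s, matching the values given above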
All numerical data were calculated and plotted with mean±SD. Results were analyzed by unpaired two tailed Welch's t-test or two tailed Mann Whitney U test. The statistical significances of the differences are indicated.
The authors declare that the data supporting the findings of this study are available within the article and its Supplementary Information files, or are available from the corresponding authors upon request.
Netherton, C. L. & Wileman, T. Virus factories, double membrane vesicles and viroplasm generated in animal cells. Curr. Opin. Virol. 1, 381–387 (2011).
Novoa, R. R. et al. Virus factories: associations of cell organelles for viral replication and morphogenesis. Biol. Cell. 97, 147–172 (2005).
Chinchar, V. G., Hyatt, A., Miyazaki, T. & Williams, T. Family iridoviridae: poor viral relations no longer. Curr. Top. Microbiol. Immunol. 328, 123–170 (2009).
Risco, C. et al. Endoplasmic reticulum-Golgi intermediate compartment membranes and vimentin filaments participate in vaccinia virus assembly. J. Virol. 76, 1839–1855 (2002).
Rojo, G., Garcia-Beato, R., Vinuela, E., Salas, M. L. & Salas, J. Replication of African swine fever virus DNA in infected cells. Virology. 257, 524–536 (1999).
Schramm, B. & Locker, J. K. Cytoplasmic organization of POXvirus DNA replication. Traffic. 6, 839–846 (2005).
Avila-Perez, G., Rejas, M. T. & Rodriguez, D. Ultrastructural characterization of membranous torovirus replication factories. Cell. Microbiol. 18, 1691–1708 (2016).
Harak, C. & Lohmann, V. Ultrastructure of the replication sites of positive-strand RNA viruses. Virology. 479–480, 418–433 (2015).
Kopek, B. G., Perkins, G., Miller, D. J., Ellisman, M. H. & Ahlquist, P. Three-dimensional analysis of a viral RNA replication complex reveals a virus-induced mini-organelle. PLoS Biol. 5, e220 (2007).
Heinrich, B. S., Cureton, D. K., Rahmeh, A. A. & Whelan, S. P. Protein expression redirects vesicular stomatitis virus RNA synthesis to cytoplasmic inclusions. PLoS Pathog. 6, e1000958 (2010).
Lahaye, X. et al. Functional characterization of Negri bodies (NBs) in rabies virus-infected cells: evidence that NBs are sites of viral transcription and replication. J. Virol. 83, 7948–7958 (2009).
Hoenen, T. et al. Inclusion bodies are a site of ebolavirus replication. J. Virol. 86, 11779–11788 (2012).
Menager, P. et al. Toll-like receptor 3 (TLR3) plays a major role in the formation of rabies virus negri bodies. PLoS Pathog. 5, e1000315 (2009).
Negri, A. Contributo allo studio dell' eziologia della rabia. Bol. Soc. Med. Chir. Pavia 2, 88–114 (1903).
Albertini, A. A. et al. Crystal structure of the rabies virus nucleoprotein-RNA complex. Science 313, 360–363 (2006).
Albertini, A. A., Ruigrok, R. W. & Blondel, D. Rabies virus transcription and replication. Adv. Virus. Res. 79, 1–22 (2011).
Pollin, R., Granzow, H., Kollner, B., Conzelmann, K. K. & Finke, S. Membrane and inclusion body targeting of lyssavirus matrix proteins. Cell Microbiol. 15, 200–212 (2013).
Lahaye, X., Vidy, A., Fouquet, B. & Blondel, D. Hsp70 protein positively regulates rabies virus infection. J. Virol. 86, 4743–4751 (2012).
Fouquet, B. et al. Focal adhesion kinase is involved in rabies virus infection through its interaction with viral phosphoprotein P. J. Virol. 89, 1640–1651 (2015).
Nikolic, J., Civas, A., Lama, Z., Lagaudriere-Gesbert, C. & Blondel, D. Rabies virus infection induces the formation of stress granules closely connected to the viral factories. PLoS Pathog. 12, e1005942 (2016).
Molliex, A. et al. Phase separation by low complexity domains promotes stress granule assembly and drives pathological fibrillization. Cell 163, 123–133 (2015).
Wippich, F. et al. Dual specificity kinase DYRK3 couples stress granule condensation/dissolution to mTORC1 signaling. Cell 152, 791–805 (2013).
White, J. P. & Lloyd, R. E. Regulation of stress granules in virus systems. Trends Microbiol. 20, 175–183 (2012).
Brangwynne, C. P. et al. Germline P granules are liquid droplets that localize by controlled dissolution/condensation. Science 324, 1729–1732 (2009).
Elbaum-Garfinkle, S. et al. The disordered P granule protein LAF-1 drives phase separation into droplets with tunable viscosity and dynamics. Proc. Natl Acad. Sci. USA. 112, 7189–7194 (2015).
Brangwynne, C. P., Mitchison, T. J. & Hyman, A. A. Active liquid-like behavior of nucleoli determines their size and shape in xenopus laevis oocytes. Proc. Natl Acad. Sci. USA. 108, 4334–4339 (2011).
Nott, T. J. et al. Phase transition of a disordered nuage protein generates environmentally responsive membraneless organelles. Mol. Cell 57, 936–947 (2015).
Phair, R. D. & Misteli, T. High mobility of proteins in the mammalian cell nucleus. Nature. 404, 604–609 (2000).
Chenik, M., Chebli, K., Gaudin, Y. & Blondel, D. In vivo interaction of rabies virus phosphoprotein (P) and nucleoprotein (N): existence of two N-binding sites on P protein. J. Gen. Virol. 75, 2889–2896 (1994).
Gerard, F. C. et al. Modular organization of rabies virus phosphoprotein. J. Mol. Biol. 388, 978–996 (2009).
Gupta, A. K., Blondel, D., Choudhary, S. & Banerjee, A. K. The phosphoprotein of rabies virus is phosphorylated by a unique cellular protein kinase and specific isomers of protein kinase C. J. Virol. 74, 91–98 (2000).
Marzahn, M. R. et al. Higher-order oligomerization promotes localization of SPOP to liquid nuclear speckles. EMBO. J. 35, 1254–1275 (2016).
Nielsen, F. C., Hansen, H. T. & Christiansen, J. RNA assemblages orchestrate complex cellular processes. Bioessays 38, 674–681 (2016).
Weber, S. C. & Brangwynne, C. P. Getting RNA and protein in phase. Cell 149, 1188–1191 (2012).
Mavrakis, M. et al. Isolation and characterisation of the rabies virus N degrees-P complex produced in insect cells. Virology. 305, 406–414 (2003).
Ivanov, I., Crépin, T., Jamin, M. & Ruigrok, R. W. Structure of the dimerization domain of the rabies virus phosphoprotein. J. Virol. 84, 3707–3710 (2010).
Ribeiro Ede, A. Jr. et al. Binding of rabies virus polymerase cofactor to recombinant circular nucleoprotein-RNA complexes. J. Mol. Biol. 394, 558–575 (2009).
Cumberworth, A., Lamour, G., Babu, M. M. & Gsponer, J. Promiscuity as a functional trait: intrinsically disordered regions as central players of interactomes. Biochem. J. 454, 361–369 (2013).
Tompa, P. & Fuxreiter, M. Fuzzy complexes: polymorphism and structural disorder in protein-protein interactions. Trends Biochem. Sci. 33, 2–8 (2008).
Onomoto, K., Yoneyama, M., Fung, G., Kato, H. & Fujita, T. Antiviral innate immunity and stress granule responses. Trends Immunol. 35, 420–428 (2014).
Santiago, F. W. et al. Hijacking of RIG-I signaling proteins into virus-induced cytoplasmic structures correlates with the inhibition of type I interferon responses. J. Virol. 88, 4572–4585 (2014).
Feric, M. et al. Coexisting liquid phases underlie nucleolar subcompartments. Cell 165, 1686–1697 (2016).
Dinh, P. X. et al. Induction of stress granule-like structures in vesicular stomatitis virus-infected cells. J. Virol. 87, 372–383 (2013).
Charlier, C. M. et al. Analysis of borna disease virus trafficking in live infected cells by using a virus encoding a tetracysteine-tagged p protein. J. Virol. 87, 12339–12348 (2013).
Schudt, G., Kolesnikova, L., Dolnik, O., Sodeik, B. & Becker, S. Live-cell imaging of Marburg virus-infected cells uncovers actin-dependent transport of nucleocapsids over long distances. Proc. Natl Acad. Sci. USA 110, 14402–14407 (2013).
Derdowski, A. et al. Human metapneumovirus nucleoprotein and phosphoprotein interact and provide the minimal requirements for inclusion body formation. J. Gen. Virol. 89, 2698–2708 (2008).
Fearns, R., Young, D. F. & Randall, R. E. Evidence that the paramyxovirus simian virus 5 can establish quiescent infections by remaining inactive in cytoplasmic inclusion bodies. J. Gen. Virol. 75, 3525–3539 (1994).
Zhang, S. et al. An amino acid of human parainfluenza virus type 3 nucleoprotein is critical for template function and cytoplasmic inclusion body formation. J. Virol. 87, 12457–12470 (2013).
Wichgers Schreur, P. J. & Kortekaas, J. Single-molecule FISH reveals non-selective packaging of rift valley fever virus genome segments. PLoS Pathog. 12, e1005800 (2016).
Buchholz, U. J., Finke, S. & Conzelmann, K. K. Generation of bovine respiratory syncytial virus (BRSV) from cDNA: BRSV NS2 is not essential for virus replication in tissue culture, and the human RSV leader region acts as a functional BRSV genome promoter. J. Virol. 73, 251–259 (1999).
Ugolini, G. Advances in viral transneuronal tracing. J. Neurosci. Methods 194, 2–20 (2010).
White, J. P., Cardenas, A. M., Marissen, W. E. & Lloyd, R. E. Inhibition of cytoplasmic mRNA stress granule formation by a viral proteinase. Cell Host Microbe 2, 295–305 (2007).
Finke, S. & Conzelmann, K. K. Virus promoters determine interference by defective RNAs: selective amplification of mini-RNA vectors and rescue from cDNA by a 3′ copy-back ambisense rabies virus. J. Virol. 73, 3818–3825 (1999).
Wirblich, C., Tan, G. S., Papaneri, A. & Godlewski, P. J. PPEY motif within the rabies virus (RV) matrix protein is essential for efficient virion release and RV pathogenicity. J. Virol. 82, 9730–9738 (2008).
Niwa, H., Yamamura, K. & Miyazaki, J. Efficient selection for high-expression transfectants with a novel eukaryotic vector. Gene 108, 193–199 (1991).
Phair, R. D., Gorski, S. A. & Misteli, T. Measurement of dynamic protein binding to chromatin in vivo, using photobleaching microscopy. Methods Enzymol. 375, 393–414 (2004).
# Types of differential equations
Differential equations are equations that involve derivatives. They are used to model a wide range of phenomena in various fields, including physics, engineering, and biology. There are several types of differential equations, each with its own characteristics and solution methods.
1. Ordinary Differential Equations (ODEs)
- ODEs involve derivatives with respect to a single independent variable.
- They are used to model systems that change over time.
- Example: Newton's second law of motion, which relates the acceleration of an object to the forces acting on it.
2. Partial Differential Equations (PDEs)
- PDEs involve derivatives with respect to multiple independent variables.
- They are used to model systems that vary in space and time.
- Example: The heat equation, which describes how temperature changes over time and space.
3. Linear Differential Equations
- Linear differential equations have the form $a_n(x)y^{(n)}(x) + a_{n-1}(x)y^{(n-1)}(x) + ... + a_1(x)y'(x) + a_0(x)y(x) = f(x)$.
- First-order linear equations can be solved with integrating factors, while higher-order linear equations with constant coefficients yield to methods such as undetermined coefficients and variation of parameters.
4. Nonlinear Differential Equations
- Nonlinear differential equations do not have a linear relationship between the dependent variable and its derivatives.
- They often require numerical methods or approximation techniques to find solutions.
5. First-Order Differential Equations
- First-order differential equations involve only the first derivative of the dependent variable.
- They can be solved using techniques such as separation of variables and integrating factors.
6. Second-Order Differential Equations
- Second-order differential equations involve the second derivative of the dependent variable.
- They can be solved using techniques such as the method of undetermined coefficients and variation of parameters.
7. Homogeneous Differential Equations
- Homogeneous differential equations have the form $a_n(x)y^{(n)}(x) + a_{n-1}(x)y^{(n-1)}(x) + ... + a_1(x)y'(x) + a_0(x)y(x) = 0$.
- Linear homogeneous equations with constant coefficients can be solved via the characteristic equation, and many first-order cases yield to separation of variables.
8. Inhomogeneous Differential Equations
- Inhomogeneous differential equations have the form $a_n(x)y^{(n)}(x) + a_{n-1}(x)y^{(n-1)}(x) + ... + a_1(x)y'(x) + a_0(x)y(x) = f(x)$.
- They can be solved using techniques such as variation of parameters and the method of undetermined coefficients.
9. Autonomous Differential Equations
- Autonomous differential equations do not depend explicitly on the independent variable.
- They can be solved using techniques such as separation of variables and phase plane analysis.
10. Stiff Differential Equations
- Stiff differential equations are characterized by having solutions that vary rapidly in some regions and slowly in others.
- They require specialized numerical methods to solve accurately.
11. Boundary Value Problems
- Boundary value problems involve finding the solution to a differential equation subject to specified boundary conditions.
- They can be solved using techniques such as shooting methods and finite difference methods.
## Exercise
Which type of differential equation would be appropriate to model the spread of a disease in a population over time?
### Solution
A system of ordinary differential equations would be appropriate to model the spread of a disease in a population over time. This is because the variables (such as the number of infected individuals) change over time, but there is only one independent variable (time).
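To make the answer concrete, here is a minimal sketch of such a system solved numerically with SciPy's `solve_ivp`. The SIR model, the parameter values `beta` and `gamma`, and the initial conditions are all assumptions chosen for illustration; they are not part of the exercise.

```python
from scipy.integrate import solve_ivp

# Assumed illustrative rates: infection rate beta, recovery rate gamma
beta, gamma = 0.3, 0.1

def sir(t, y):
    """Right-hand side of a simple SIR model (three coupled ODEs in time)."""
    S, I, R = y
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

# Start with 99% susceptible, 1% infected, 0% recovered, and integrate to t = 160
sol = solve_ivp(sir, t_span=(0, 160), y0=[0.99, 0.01, 0.0])
print(sol.y[1, -1])   # fraction of the population still infected at the final time
```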
# Solving differential equations analytically
Analytical methods for solving differential equations involve finding an exact solution that satisfies the equation. These methods often rely on mathematical techniques such as integration, differentiation, and algebraic manipulation.
1. Separation of Variables
- This method is used for first-order ordinary differential equations.
- The equation is rearranged so that all terms involving the dependent variable are on one side and all terms involving the independent variable are on the other side.
- The resulting equation can be integrated to find the solution.
2. Integrating Factors
- This method is used for first-order linear ordinary differential equations.
- The equation is multiplied by an integrating factor that makes the equation exact.
- The resulting equation can be integrated to find the solution.
3. Method of Undetermined Coefficients
- This method is used for linear ordinary differential equations with constant coefficients.
- The solution is assumed to have a specific form, and the coefficients are determined by substituting the assumed solution into the differential equation.
4. Variation of Parameters
- This method is used for linear ordinary differential equations with non-constant coefficients.
- The solution is assumed to have the form of a linear combination of known solutions, and the coefficients are determined by substituting the assumed solution into the differential equation.
5. Laplace Transform
- This method is used for solving ordinary differential equations with constant coefficients.
- The differential equation is transformed into an algebraic equation by applying the Laplace transform.
- The algebraic equation can be solved to find the Laplace transform of the solution, which can then be inverted to find the solution.
6. Fourier Series
- This method is used for solving partial differential equations with periodic boundary conditions.
- The solution is represented as a Fourier series, which is a sum of sine and cosine functions.
- The coefficients of the Fourier series are determined by substituting the series into the partial differential equation.
7. Separation of Variables (Partial Differential Equations)
- This method is used for solving partial differential equations with separable variables.
- The equation is rearranged so that each term depends on only one independent variable.
- The resulting equations can be solved separately, and the solutions can be combined to find the solution to the original equation.
8. Method of Characteristics
- This method is used for solving first-order partial differential equations.
- The equation is transformed into a system of ordinary differential equations along characteristic curves.
- The system of ordinary differential equations can be solved to find the solution to the partial differential equation.
9. Green's Functions
- This method is used for solving linear partial differential equations with inhomogeneous boundary conditions.
- The equation is transformed into an integral equation using a Green's function.
- The integral equation can be solved to find the solution to the partial differential equation.
10. Finite Difference Methods
- This method is used for solving differential equations numerically.
- The differential equation is approximated by a system of algebraic equations using finite difference formulas.
- The system of algebraic equations can be solved to find an approximate solution to the differential equation.
## Exercise
Which method would be appropriate to solve the following differential equation?
$\frac{dy}{dx} + 2y = 3x$
### Solution
The method of integrating factors would be appropriate to solve the differential equation $\frac{dy}{dx} + 2y = 3x$. This is because it is a first-order linear ordinary differential equation.
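As a quick check in Python: SciPy itself has no symbolic ODE solver, so this sketch assumes the SymPy library instead. `dsolve` applies the integrating-factor technique internally and should return the same general solution one obtains by hand.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# dy/dx + 2y = 3x  (first-order linear; integrating factor e^(2x))
ode = sp.Eq(y(x).diff(x) + 2 * y(x), 3 * x)
solution = sp.dsolve(ode, y(x))
print(solution)   # expected: Eq(y(x), C1*exp(-2*x) + 3*x/2 - 3/4)
```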
# Introduction to numerical methods
Numerical methods are used to approximate solutions to mathematical problems that cannot be solved analytically. These methods involve using algorithms and computer programs to perform calculations and obtain numerical results.
In the context of differential equations, numerical methods are used to approximate the solutions of differential equations that cannot be solved analytically. These methods involve discretizing the domain of the differential equation and approximating the derivatives using finite difference formulas.
There are several advantages to using numerical methods for solving differential equations:
- Numerical methods can handle complex and nonlinear differential equations that do not have analytical solutions.
- Numerical methods can provide approximate solutions with a desired level of accuracy.
- Numerical methods can handle problems with irregular or non-uniform domains.
- Numerical methods can handle problems with boundary conditions that are difficult to satisfy analytically.
The following sections introduce a set of numerical tools that are used to solve differential equations and related problems:
1. Euler's method
2. Higher-order numerical methods
3. Numerical integration methods
4. Matrix operations and their applications
5. Introduction to root finding
6. Newton's method
7. Scipy and its applications
In the following sections, we will explore these numerical methods in more detail and learn how to implement them using the Scipy library in Python.
# Euler's method
Euler's method is a simple and widely used numerical method for solving ordinary differential equations. It is based on the idea of approximating the solution of a differential equation by taking small steps and using the derivative of the function at each step.
The basic idea behind Euler's method is to start with an initial value and then use the derivative of the function to determine the slope of the tangent line at that point. This slope is then used to estimate the value of the function at a slightly later point. By repeating this process, we can approximate the solution of the differential equation over a given interval.
The formula for Euler's method is as follows:
$$y_{n+1} = y_n + h \cdot f(t_n, y_n)$$
where:
- $y_n$ is the approximation of the solution at the nth step
- $h$ is the step size
- $f(t_n, y_n)$ is the derivative of the function at the nth step
To use Euler's method, we need to specify an initial value, a step size, and the derivative of the function. We can then iterate through the steps, updating the value of $y_n$ using the formula above.
Euler's method is a first-order method, which means that the error in the approximation is proportional to the step size. Therefore, smaller step sizes generally lead to more accurate results. However, using smaller step sizes also increases the computational cost.
Let's consider the following example:
$$\frac{dy}{dt} = -2y$$
with the initial condition $y(0) = 1$.
We can use Euler's method to approximate the solution of this differential equation over the interval $0 \leq t \leq 1$ with a step size of $h = 0.1$.
First, we need to calculate the derivative of the function:
$$f(t, y) = -2y$$
Then, we can iterate through the steps using the formula for Euler's method:
$$y_{n+1} = y_n + h \cdot f(t_n, y_n)$$
For the initial step, we have $y_0 = 1$ and $t_0 = 0$. Using the formula, we can calculate $y_1$ as follows:
$$y_1 = y_0 + h \cdot f(t_0, y_0)$$
$$y_1 = 1 + 0.1 \cdot (-2 \cdot 1)$$
$$y_1 = 1 - 0.2$$
$$y_1 = 0.8$$
We can continue this process for the remaining steps until we reach the desired interval.
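The same calculation is easy to automate. Below is a minimal Python implementation of Euler's method matching the formula above; the helper names are our own, not from any library.

```python
import numpy as np

def euler(f, y0, t0, t_end, h):
    """Approximate the solution of y' = f(t, y) with Euler's method."""
    n_steps = int(round((t_end - t0) / h))
    ts = t0 + h * np.arange(n_steps + 1)
    ys = np.empty(n_steps + 1)
    ys[0] = y0
    for n in range(n_steps):
        ys[n + 1] = ys[n] + h * f(ts[n], ys[n])   # y_{n+1} = y_n + h * f(t_n, y_n)
    return ts, ys

# dy/dt = -2y, y(0) = 1, on 0 <= t <= 1 with h = 0.1
ts, ys = euler(lambda t, y: -2.0 * y, y0=1.0, t0=0.0, t_end=1.0, h=0.1)
print(ys[1])    # 0.8, matching the hand calculation above
print(ys[-1])   # Euler estimate of y(1); the exact value is exp(-2) ≈ 0.1353
```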
## Exercise
Use Euler's method to approximate the solution of the following differential equation over the interval $0 \leq t \leq 1$ with a step size of $h = 0.1$:
$$\frac{dy}{dt} = -3y$$
with the initial condition $y(0) = 2$.
Calculate the values of $y_1$, $y_2$, $y_3$, $y_4$, $y_5$, $y_6$, $y_7$, $y_8$, $y_9$, and $y_{10}$.
### Solution
$$y_1 = 1.4$$
$$y_2 = 0.98$$
$$y_3 = 0.686$$
$$y_4 = 0.4802$$
$$y_5 = 0.33614$$
$$y_6 = 0.235298$$
$$y_7 = 0.1647086$$
$$y_8 = 0.11529602$$
$$y_9 = 0.080707214$$
$$y_{10} = 0.0564940498$$
# Higher-order numerical methods
While Euler's method is a simple and widely used numerical method for solving ordinary differential equations, it is a first-order method and has limitations in terms of accuracy. Higher-order numerical methods can provide more accurate approximations of the solution.
One such method is the Runge-Kutta method, which is a family of numerical methods that use multiple evaluations of the derivative at each step. The most commonly used variant is the fourth-order Runge-Kutta method, also known as RK4.
The formula for the RK4 method is as follows:
$$y_{n+1} = y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$
where:
- $y_n$ is the approximation of the solution at the nth step
- $k_1 = h \cdot f(t_n, y_n)$
- $k_2 = h \cdot f(t_n + \frac{h}{2}, y_n + \frac{k_1}{2})$
- $k_3 = h \cdot f(t_n + \frac{h}{2}, y_n + \frac{k_2}{2})$
- $k_4 = h \cdot f(t_n + h, y_n + k_3)$
To use the RK4 method, we need to specify an initial value, a step size, and the derivative of the function. We can then iterate through the steps, updating the value of $y_n$ using the formula above.
The RK4 method is a fourth-order method, which means that the error in the approximation is proportional to the step size raised to the fourth power. Therefore, smaller step sizes generally lead to more accurate results. However, using smaller step sizes also increases the computational cost.
Let's consider the same example as before:
$$\frac{dy}{dt} = -2y$$
with the initial condition $y(0) = 1$.
We can use the RK4 method to approximate the solution of this differential equation over the interval $0 \leq t \leq 1$ with a step size of $h = 0.1$.
First, we need to calculate the derivative of the function:
$$f(t, y) = -2y$$
Then, we can iterate through the steps using the formula for the RK4 method:
$$k_1 = h \cdot f(t_n, y_n)$$
$$k_2 = h \cdot f(t_n + \frac{h}{2}, y_n + \frac{k_1}{2})$$
$$k_3 = h \cdot f(t_n + \frac{h}{2}, y_n + \frac{k_2}{2})$$
$$k_4 = h \cdot f(t_n + h, y_n + k_3)$$
$$y_{n+1} = y_n + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$
For the initial step, we have $y_0 = 1$ and $t_0 = 0$. Using the formula, we can calculate $y_1$ as follows:
$$k_1 = 0.1 \cdot (-2 \cdot 1)$$
$$k_2 = 0.1 \cdot (-2 \cdot (1 + \frac{k_1}{2}))$$
$$k_3 = 0.1 \cdot (-2 \cdot (1 + \frac{k_2}{2}))$$
$$k_4 = 0.1 \cdot (-2 \cdot (1 + k_3))$$
$$y_1 = 1 + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)$$
We can continue this process for the remaining steps until we reach the desired interval.
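A corresponding Python sketch of the RK4 update, following the formulas above (again with our own helper names, not a library routine):

```python
import numpy as np

def rk4(f, y0, t0, t_end, h):
    """Approximate the solution of y' = f(t, y) with the classical fourth-order Runge-Kutta method."""
    n_steps = int(round((t_end - t0) / h))
    ts = t0 + h * np.arange(n_steps + 1)
    ys = np.empty(n_steps + 1)
    ys[0] = y0
    for n in range(n_steps):
        t, y = ts[n], ys[n]
        k1 = h * f(t, y)
        k2 = h * f(t + h / 2, y + k1 / 2)
        k3 = h * f(t + h / 2, y + k2 / 2)
        k4 = h * f(t + h, y + k3)
        ys[n + 1] = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return ts, ys

# dy/dt = -2y, y(0) = 1, on 0 <= t <= 1 with h = 0.1
ts, ys = rk4(lambda t, y: -2.0 * y, y0=1.0, t0=0.0, t_end=1.0, h=0.1)
print(ys[1])    # ≈ 0.818733, very close to the exact value exp(-0.2) ≈ 0.818731
```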
## Exercise
Use the RK4 method to approximate the solution of the following differential equation over the interval $0 \leq t \leq 1$ with a step size of $h = 0.1$:
$$\frac{dy}{dt} = -3y$$
with the initial condition $y(0) = 2$.
Calculate the values of $y_1$, $y_2$, $y_3$, $y_4$, $y_5$, $y_6$, $y_7$, $y_8$, $y_9$, and $y_{10}$.
### Solution
$$y_1 \approx 1.48168$$
$$y_2 \approx 1.09768$$
$$y_3 \approx 0.81320$$
$$y_4 \approx 0.60245$$
$$y_5 \approx 0.44632$$
$$y_6 \approx 0.33065$$
$$y_7 \approx 0.24496$$
$$y_8 \approx 0.18147$$
$$y_9 \approx 0.13444$$
$$y_{10} \approx 0.09960$$
For this linear equation, each RK4 step multiplies the previous value by $1 + z + \frac{z^2}{2} + \frac{z^3}{6} + \frac{z^4}{24}$ with $z = -3h = -0.3$, i.e., by $0.7408375$. Note that $y_{10}$ is already very close to the exact value $2e^{-3} \approx 0.09957$, illustrating the higher accuracy of RK4 compared with Euler's method.
# Introduction to integration
Integration is a fundamental concept in calculus that involves finding the area under a curve. It has many applications in science, engineering, and mathematics.
There are two main types of integration: definite and indefinite. Definite integration involves finding the exact value of the area under a curve between two specified points. Indefinite integration involves finding a function whose derivative is equal to a given function.
In this section, we will focus on definite integration. We will learn how to calculate the definite integral of a function using both analytical and numerical methods.
Analytical methods involve finding an antiderivative of the function and evaluating it at the upper and lower limits of integration. The antiderivative of a function is a function whose derivative is equal to the original function.
For example, the definite integral of the function $f(x) = x^2$ from $x = 0$ to $x = 1$ can be calculated analytically as follows:
$$\int_0^1 x^2 dx = \left[\frac{1}{3}x^3\right]_0^1 = \frac{1}{3}(1^3 - 0^3) = \frac{1}{3}$$
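The same antiderivative-and-evaluate step can be checked symbolically in Python; this short sketch assumes the SymPy library, which the text does not otherwise introduce.

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(x**2, (x, 0, 1)))   # 1/3
```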
Numerical methods, on the other hand, involve approximating the definite integral using numerical techniques. These techniques divide the interval of integration into smaller subintervals and approximate the area under the curve using these subintervals.
One commonly used numerical method is the trapezoidal rule. The trapezoidal rule approximates the area under the curve as the sum of the areas of trapezoids formed by adjacent points on the curve.
Let's consider the following example:
$$\int_0^5 \sin(x) \cos(x^2) + 1 dx$$
We can use the trapezoidal rule to approximate the value of this definite integral. First, we need to divide the interval $0 \leq x \leq 5$ into smaller subintervals. Let's use a step size of $h = 0.1$, which gives us 50 subintervals (51 grid points).
Next, we evaluate the function at each point and calculate the area of the trapezoid formed by adjacent points. Finally, we sum up the areas of all the trapezoids to get the approximate value of the definite integral.
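A minimal Python sketch of this procedure, writing out the composite trapezoidal rule explicitly and then comparing it with NumPy's built-in `trapz` on the same samples (the two should agree):

```python
import numpy as np

f = lambda x: np.sin(x) * np.cos(x**2) + 1

a, b, h = 0.0, 5.0, 0.1
x = a + h * np.arange(int(round((b - a) / h)) + 1)   # 51 grid points, 50 subintervals
y = f(x)

# Composite trapezoidal rule: h * (y_0/2 + y_1 + ... + y_{n-1} + y_n/2)
approx = h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
print(approx)
print(np.trapz(y, x))   # NumPy's trapezoidal rule on the same samples
```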
## Exercise
Use the trapezoidal rule to approximate the value of the definite integral:
$$\int_0^5 \sin(x) \cos(x^2) + 1 dx$$
Divide the interval $0 \leq x \leq 5$ into 50 subintervals (51 grid points) using a step size of $h = 0.1$.
Calculate the approximate value of the definite integral.
### Solution
Approximate value of the definite integral: 5.04201628314
# Numerical integration methods
In addition to the trapezoidal rule, there are several other numerical integration methods that can be used to approximate the value of a definite integral. These methods vary in terms of accuracy and computational cost.
One such method is Simpson's rule, which approximates the area under the curve as a series of parabolic arcs. Simpson's rule is more accurate than the trapezoidal rule and requires fewer subintervals to achieve a given level of accuracy.
Another method is Gaussian quadrature, which uses a weighted sum of function values at specially chosen points within the interval of integration. For smooth integrands, Gaussian quadrature is typically more accurate than Simpson's rule for the same number of function evaluations, because both the evaluation points and the weights are chosen optimally.
The choice of numerical integration method depends on the desired level of accuracy and the computational resources available. In general, higher-order methods provide more accurate results but require more computational effort.
Let's consider the same example as before:
$$\int_0^5 \sin(x) \cos(x^2) + 1 dx$$
We can use Simpson's rule to approximate the value of this definite integral. First, we need to divide the interval $0 \leq x \leq 5$ into smaller subintervals. Let's again use a step size of $h = 0.1$, which gives us 50 subintervals (51 grid points); Simpson's rule requires an even number of subintervals, which this satisfies.
Next, we evaluate the function at the grid points and fit a parabolic arc through each consecutive group of three points (i.e., each pair of adjacent subintervals). Finally, we sum up the areas under all the parabolic arcs to get the approximate value of the definite integral.
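The same computation with SciPy: `integrate.simpson` applies the composite Simpson's rule to sampled values, and `integrate.quad` (adaptive quadrature) provides a reference value for comparison. In SciPy versions before 1.6 the Simpson routine is named `simps` rather than `simpson`.

```python
import numpy as np
from scipy import integrate

f = lambda x: np.sin(x) * np.cos(x**2) + 1

x = np.linspace(0.0, 5.0, 51)            # 51 grid points -> 50 subintervals of width 0.1
y = f(x)

print(integrate.simpson(y, x=x))          # composite Simpson's rule on the samples
print(integrate.quad(f, 0.0, 5.0)[0])     # adaptive quadrature, for comparison
```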
## Exercise
Use Simpson's rule to approximate the value of the definite integral:
$$\int_0^5 \sin(x) \cos(x^2) + 1 dx$$
Divide the interval $0 \leq x \leq 5$ into 50 subintervals (51 grid points) using a step size of $h = 0.1$.
Calculate the approximate value of the definite integral.
### Solution
Approximate value of the definite integral: 5.10034506754
# Matrix operations and their applications
Matrices are a fundamental concept in linear algebra and have many applications in scientific computing. They are used to represent and manipulate systems of linear equations, perform transformations in computer graphics, analyze networks in social sciences, and much more.
In this section, we will learn about basic matrix operations, such as addition, subtraction, multiplication, and inversion. We will also explore some applications of matrices in solving systems of linear equations and performing transformations.
Matrix addition and subtraction are performed by adding or subtracting corresponding elements of the matrices. The matrices must have the same dimensions for addition or subtraction to be possible.
Matrix multiplication is a bit more involved. The product of two matrices is obtained by taking the dot product of each row of the first matrix with each column of the second matrix. The resulting matrix has dimensions equal to the number of rows of the first matrix and the number of columns of the second matrix.
Matrix inversion is the process of finding the inverse of a matrix, which, when multiplied by the original matrix, gives the identity matrix. Not all matrices have inverses, and those that do are called invertible or non-singular matrices.
Let's consider the following example:
$$A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
$$B = \begin{bmatrix} 5 & 6 \\ 7 & 8 \end{bmatrix}$$
We can perform matrix addition, subtraction, multiplication, and inversion on these matrices.
Matrix addition:
$$A + B = \begin{bmatrix} 1 + 5 & 2 + 6 \\ 3 + 7 & 4 + 8 \end{bmatrix} = \begin{bmatrix} 6 & 8 \\ 10 & 12 \end{bmatrix}$$
Matrix subtraction:
$$A - B = \begin{bmatrix} 1 - 5 & 2 - 6 \\ 3 - 7 & 4 - 8 \end{bmatrix} = \begin{bmatrix} -4 & -4 \\ -4 & -4 \end{bmatrix}$$
Matrix multiplication:
$$A \cdot B = \begin{bmatrix} 1 \cdot 5 + 2 \cdot 7 & 1 \cdot 6 + 2 \cdot 8 \\ 3 \cdot 5 + 4 \cdot 7 & 3 \cdot 6 + 4 \cdot 8 \end{bmatrix} = \begin{bmatrix} 19 & 22 \\ 43 & 50 \end{bmatrix}$$
Matrix inversion:
$$A^{-1} = \begin{bmatrix} -2 & 1 \\ \frac{3}{2} & -\frac{1}{2} \end{bmatrix}$$
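These computations can be checked with NumPy; a short sketch (note that `np.linalg.inv` raises `LinAlgError` for singular matrices):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

print(A + B)              # elementwise addition
print(A - B)              # elementwise subtraction
print(A @ B)              # matrix product (same as np.dot(A, B))
print(np.linalg.inv(A))   # [[-2. ,  1. ], [ 1.5, -0.5]]
```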
## Exercise
Perform the following matrix operations:
$$A = \begin{bmatrix} 2 & 3 \\ 4 & 5 \end{bmatrix}$$
$$B = \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix}$$
1. Calculate $A + B$
2. Calculate $A - B$
3. Calculate $A \cdot B$
4. Calculate $A^{-1}$
### Solution
1. $A + B = \begin{bmatrix} 2 + 1 & 3 + 2 \\ 4 + 3 & 5 + 4 \end{bmatrix} = \begin{bmatrix} 3 & 5 \\ 7 & 9 \end{bmatrix}$
2. $A - B = \begin{bmatrix} 2 - 1 & 3 - 2 \\ 4 - 3 & 5 - 4 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$
3. $A \cdot B = \begin{bmatrix} 2 \cdot 1 + 3 \cdot 3 & 2 \cdot 2 + 3 \cdot 4 \\ 4 \cdot 1 + 5 \cdot 3 & 4 \cdot 2 + 5 \cdot 4 \end{bmatrix} = \begin{bmatrix} 11 & 16 \\ 19 & 28 \end{bmatrix}$
4. $A^{-1} = \begin{bmatrix} -\frac{5}{2} & \frac{3}{2} \\ 2 & -1 \end{bmatrix}$, since $\det(A) = 2 \cdot 5 - 3 \cdot 4 = -2$
# Introduction to root finding
Root finding is the process of finding the values of a variable that make a function equal to zero. It is a fundamental problem in mathematics and has many applications in science, engineering, and finance.
There are several methods for root finding, including the bisection method, the Newton-Raphson method, and the secant method. These methods differ in terms of their convergence properties and computational cost.
In this section, we will focus on the bisection method, which is a simple and robust method for finding roots of a function. We will also explore some applications of root finding in solving equations and optimization problems.
The bisection method works by repeatedly dividing the interval in which the root is known to exist in half and selecting the subinterval in which the function changes sign. This process is repeated until the interval becomes small enough to approximate the root with the desired level of accuracy.
The bisection method requires an initial interval in which the root is known to exist. This interval is typically determined by evaluating the function at two points and checking if the function changes sign between them.
Once the initial interval is determined, the bisection method proceeds as follows:
1. Divide the interval in half and evaluate the function at the midpoint.
2. If the function is zero at the midpoint, the midpoint is the root.
3. If the function changes sign between the left endpoint and the midpoint, the root is in the left subinterval. Repeat the process with the left subinterval.
4. Otherwise, the function changes sign between the midpoint and the right endpoint, so the root is in the right subinterval. Repeat the process with the right subinterval.
The bisection method continues this process until the interval becomes small enough to approximate the root with the desired level of accuracy.
Let's consider the following example:
$$f(x) = x^2 - 2$$
We want to find the root of this function within the interval $[1, 2]$.
We can use the bisection method to approximate the root of this function. First, we evaluate the function at the endpoints of the interval:
$$f(1) = 1^2 - 2 = -1$$
$$f(2) = 2^2 - 2 = 2$$
Since the function changes sign between the left and right endpoints, we know that the root is in the interval $[1, 2]$.
Next, we divide the interval in half and evaluate the function at the midpoint:
$$f(1.5) = 1.5^2 - 2 = 0.25$$
Since the function is negative at the left endpoint and positive at the midpoint, the sign change (and therefore the root) is in the left subinterval $[1, 1.5]$.
We repeat this process with the left subinterval:
$$f(1.25) = 1.25^2 - 2 = -0.4375$$
Since the function changes sign between the left and right endpoints, we know that the root is in the interval $[1.25, 1.5]$.
We continue this process until the interval becomes small enough to approximate the root with the desired level of accuracy.
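A minimal bisection sketch in Python (the function name, tolerance, and error handling are illustrative assumptions, not part of the text above):

```python
def bisect(f, a, b, tol=1e-6):
    """Approximate a root of f in [a, b], assuming f(a) and f(b) have opposite signs."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:   # sign change in the left half: root is in [a, m]
            b, fb = m, fm
        else:             # sign change in the right half: root is in [m, b]
            a, fa = m, fm
    return (a + b) / 2

print(bisect(lambda x: x**2 - 2, 1, 2))   # approx 1.41421, i.e. sqrt(2)
```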
## Exercise
Use the bisection method to approximate the root of the following function within the interval $[1, 2]$ (the function is negative at both endpoints of $[0, 1]$, so the root must be bracketed by $[1, 2]$):
$$f(x) = x^3 - x - 1$$
Calculate the approximate value of the root with an accuracy of $10^{-6}$.
### Solution
Approximate value of the root: 1.3247179985046387
# Newton's method
Newton's method is another widely used method for root finding. It is an iterative method that uses the derivative of the function to approximate the root.
The basic idea behind Newton's method is to start with an initial guess for the root and then use the derivative of the function to determine the slope of the tangent line at that point. This slope is then used to estimate the value of the root by finding the x-intercept of the tangent line. By repeating this process, we can refine our estimate of the root until we reach the desired level of accuracy.
Newton's method can converge to the root very quickly, but it requires an initial guess that is close to the root. If the initial guess is too far from the root, the method may not converge or may converge to a different root.
The formula for Newton's method is as follows:
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$
where:
- $x_n$ is the approximation of the root at the nth step
- $f(x_n)$ is the value of the function at the nth step
- $f'(x_n)$ is the derivative of the function at the nth step
To use Newton's method, we need to specify an initial guess for the root and the derivative of the function. We can then iterate through the steps, updating the value of $x_n$ using the formula above.
Let's consider the same example as before:
$$f(x) = x^2 - 2$$
We want to find the root of this function within the interval $[1, 2]$.
We can use Newton's method to approximate the root of this function. First, we choose an initial guess for the root, let's say $x_0 = 1.5$.
Next, we evaluate the function and its derivative at the initial guess:
$$f(x_0) = (1.5)^2 - 2 = 0.25$$
$$f'(x_0) = 2 \cdot 1.5 = 3$$
Using these values, we can update our estimate of the root using the formula for Newton's method:
$$x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 1.5 - \frac{0.25}{3} = 1.4166666666666665$$
We repeat this process until we reach the desired level of accuracy.
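A hedged sketch of the iteration in Python (the stopping rule based on the step size and the iteration cap are assumptions):

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x <- x - f(x) / f'(x) until the update step is smaller than tol."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

root = newton(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.5)
print(root)   # approx 1.4142135623730951
```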
## Exercise
Use Newton's method to approximate the root of the following function:
$$f(x) = x^3 - x - 1$$
Choose an initial guess of $x_0 = 1$.
Calculate the approximate value of the root with an accuracy of $10^{-6}$.
### Solution
Approximate value of the root: 1.324717957244746
# Scipy and its applications
Scipy is a powerful scientific computing library in Python that provides many tools and functions for numerical computing. It is built on top of Numpy and provides additional functionality for optimization, interpolation, integration, linear algebra, and more.
In this section, we will explore some of the capabilities of Scipy and its applications in solving scientific and engineering problems.
Scipy provides a wide range of functions for numerical integration, including both analytical and numerical methods. These functions can be used to calculate definite integrals, solve differential equations, and perform other numerical computations.
For example, Scipy's `quad` function can be used to calculate the definite integral of a function. The `quad` function takes as input the function to be integrated and the lower and upper limits of integration, and returns the value of the definite integral together with an estimate of the error.
Scipy also provides functions for solving systems of linear equations, finding roots of equations, and performing optimization. These functions can be used to solve a wide range of scientific and engineering problems.
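For instance, a brief sketch of a few of these capabilities (the particular linear system, root bracket, and objective below are illustrative assumptions):

```python
import numpy as np
from scipy import linalg, optimize

# Solve the linear system A x = b
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([9.0, 8.0])
print(linalg.solve(A, b))                                # [2. 3.]

# Find a root of x^3 - x - 1 in the bracket [1, 2]
print(optimize.brentq(lambda x: x**3 - x - 1, 1, 2))     # approx 1.3247

# Minimise a simple quadratic
print(optimize.minimize_scalar(lambda x: (x - 2)**2).x)  # approx 2.0
```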
Let's consider the following example:
$$\int_0^5 \left(\sin(x) \cos(x^2) + 1\right) dx$$
We can use Scipy's `quad` function to calculate the value of this definite integral. First, we need to define the function to be integrated:
```python
import numpy as np
from scipy.integrate import quad
def f(x):
    return np.sin(x) * np.cos(x**2) + 1
```
Next, we can use the `quad` function to calculate the definite integral:
```python
result, error = quad(f, 0, 5)
```
The `quad` function returns two values: the value of the definite integral and an estimate of the error in the result. In this case, the value of the definite integral is stored in the variable `result`.
## Exercise
Use Scipy's `quad` function to calculate the value of the definite integral:
$$\int_0^5 x^2 dx$$
Calculate the value of the definite integral and store it in the variable `result`.
### Solution
Value of the definite integral: 41.666666666666664
Political Behavior
The Effect of Political Competition on Democratic Accountability
Philip Edward Jones
First Online: 01 June 2012
Representing uncompetitive, homogeneous constituencies is increasingly the norm for American legislators. Extensive research has investigated how competition affects the way representatives respond to their constituents' policy preferences. This paper explores competition's effect on the other side of representation, how constituents respond to their legislators' policy record. Combining multiple measures of state competitiveness with large-N survey data, I demonstrate that competition enhances democratic accountability. Voters in competitive states are more interested in politics, more aware of the policy positions their U.S. senators have taken, and more likely to hold them accountable for those positions at election time. Robustness checks show that these effects are not due to the intensity of campaigning in a state: general competition, not particular campaign activities, drives citizens' response. The recent increase in uncompetitive constituencies has likely lessened the degree to which legislators are held accountable for their actions in office.
Keywords: Accountability · Competition · Heterogeneity · Representation
The online version of this article (doi: 10.1007/s11109-012-9203-3) contains supplementary material, which is available to authorized users.
An erratum to this article can be found at http://dx.doi.org/10.1007/s11109-012-9218-9.
I thank Steve Ansolabehere, Barry Burden, Claudine Gay, Jason Mycoff, Joe Pika, Meg Rithmire, Sid Verba, and in particular Ben Bishin for helpful comments on previous versions of this paper. I am also grateful to the journal's three anonymous reviewers who helped clarify and strengthen the argument substantially. All errors are, of course, my own.
Appendix: Model Specifications
Interest model
In the models predicting constituents' interest in politics (shown in Table 1), all of the coefficients except for the intercept are fixed. The individual-level model can be written as follows:
$$ \begin{aligned} \hbox{Political interest}_{ij} =& \beta_{0j} + \beta_{1j}(\hbox{Female})_{ij} + \beta_{2j}(\hbox{Age})_{ij} + \beta_{3j}(\hbox{Income})_{ij} + \beta_{4j}(\hbox{Black})_{ij} + \beta_{5j}(\hbox{Hispanic})_{ij} + \beta_{6j}(\hbox{Other race})_{ij}\\ &\quad + \beta_{7j}(\hbox{High school})_{ij} + \beta_{8j}(\hbox{Some college})_{ij} + \beta_{9j}(\hbox{College})_{ij} + \beta_{10j}(\hbox{Post-college})_{ij} + r_{ij} \end{aligned} $$
where $i$ indexes individuals, $j$ indexes states, and $r_{ij}$ represents the residual for individual $i$ in state $j$. At the state level, $\beta_{0j}$ is modeled as a function of several state-level variables (as an example, for Model 1(a) in Table 1):
$$ \begin{aligned} \beta_{0j} =& \gamma_{00} + \gamma_{01}(\hbox{Turnout 2004})_{j} + \gamma_{02}(\hbox{High school graduates})_{j} + \gamma_{03}(\hbox{Electoral competition})_{j} + \mu_{0j}\\ \hbox{and} & \quad \beta_{pj} = \gamma_{p0}\quad \hbox{for}\,\, \hbox{p}=1-10 \end{aligned} $$
The full model is obtained by substituting the second model into the first one. Since the dependent variable is a categorical variable, I use an ordered logistic regression model (see Bauer and Sterba 2011). The knowledge models in Table 2 and the congruence models in Table 3 take the same approach, using a Poisson and least squares estimator respectively.
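As a sketch, carrying out that substitution for Model 1(a) gives the combined (reduced-form) equation, where $X_{pij}$ is shorthand for the individual-level covariates listed above:
$$ \hbox{Political interest}_{ij} = \gamma_{00} + \gamma_{01}(\hbox{Turnout 2004})_{j} + \gamma_{02}(\hbox{High school graduates})_{j} + \gamma_{03}(\hbox{Electoral competition})_{j} + \sum_{p=1}^{10}\gamma_{p0}X_{pij} + \mu_{0j} + r_{ij} $$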
Vote choice model
In the models predicting constituents' vote choice for or against the incumbent senator (shown in Table 4), all of the coefficients except for the intercept and policy congruence are fixed. The individual-level model can be written as follows:
$$ \begin{aligned} \hbox{Vote choice}_{ij} =& \beta_{0j} + \beta_{1j}(\hbox{Policy congruence})_{ij} + \beta_{2j}(\hbox{Co-partisan})_{ij} + \beta_{3j}(\hbox{Independent})_{ij} + \beta_{4j}(\hbox{Don't know senator's party})_{ij}\\ &\quad+ \beta_{5j}(\hbox{Economy gotten worse})_{ij} + \beta_{6j}(\hbox{Economy stayed same})_{ij} + \beta_{7j}(\hbox{Economy gotten better})_{ij} \\ &\quad + \beta_{8j}(\hbox{Economy gotten much better})_{ij} + \beta_{9j}(\hbox{Don't know economy})_{ij} + \beta_{10j}(\hbox{Iraq war not a mistake})_{ij} \\ &\quad+ \beta_{11j}(\hbox{Don't know Iraq war})_{ij} + r_{ij} \end{aligned} $$
where $i$ indexes individuals, $j$ indexes states, and $r_{ij}$ represents the residual for individual $i$ in state $j$. At the state level, $\beta_{0j}$ and $\beta_{1j}$ are modeled as a function of several state-level variables (as an example, for Model 1(a) in Table 4):
$$ \begin{aligned} \beta_{0j} =& \gamma_{00} + \gamma_{01}(\hbox{Turnout 2004})_{j} + \gamma_{02}(\hbox{High school graduates})_{j} + \gamma_{03}(\hbox{GOP senator})_{j} + \gamma_{04}(\hbox{Electoral competition})_{j} + \mu_{0j}\\ \beta_{1j} =& \gamma_{10} + \gamma_{11}(\hbox{Turnout 2004})_{j} + \gamma_{12}(\hbox{High school graduates})_{j} + \gamma_{13}(\hbox{GOP senator})_{j} + \gamma_{14}(\hbox{Electoral competition})_{j} + \mu_{1j}\\ \hbox{and}& \quad \beta_{pj} = \gamma_{p0}\quad \hbox{for}\,\, \hbox{p}=2-11 \end{aligned} $$
The full model is obtained by substituting the second model into the first one.
Abramowitz, A. I., Alexander, B., & Gunning, M. (2006). Incumbency, redistricting, and the decline of competition in U.S. house elections. Journal of Politics, 68(1), 75–88.
Aistrup, J. A. (2004). Constituency diversity and party competition: A county and state level analysis. Political Research Quarterly, 57(2), 267–281.
Alesina, A., & Ferrara, E. L. (2000). Participation in heterogeneous communities. Quarterly Journal of Economics, 115(3), 847–904.
Alesina, A., Baqir, R., & Easterly, W. (1999). Public goods and ethnic divisions. Quarterly Journal of Economics, 114, 1243–1284.
Anderson, C. J., & Paskeviciute, A. (2006). How ethnic and linguistic heterogeneity influence the prospects for civil society: A comparative study of citizenship behavior. Journal of Politics, 68(4), 783–802.
Ansolabehere, S., & Jones, P. E. (2010). Constituents' responses to congressional roll call voting. American Journal of Political Science, 54(3), 583–597.
Arnold, R. D. (1990). The logic of congressional action. New Haven: Yale University Press.
Bailey, M., & Brady, D. W. (1998). Heterogeneity and representation: The senate and free trade. American Journal of Political Science, 42(2), 524–544.
Bauer, D. J., & Sterba, S. K. (2011). Fitting multilevel models with ordinal outcomes: Performance of alternative specifications and methods of estimation. Psychological Methods, 16(4), 373–390.
Bishin, B. G. (2003). Democracy, heterogeneity and representation: Explaining representational differences across states. Legislative Studies Section Newsletter, 26(1).
Bishin, B. G., Dow, J. K., & Adams, J. (2006). Does democracy "suffer" from diversity? Issue representation and diversity in senate elections. Public Choice, 129, 201–215.
Bishop, B. (2008). The big sort. Houghton Mifflin.
Bond, J. R. (1983). The influence of constituency diversity on electoral competition in voting for congress, 1974–1978. Legislative Studies Quarterly, 8(2), 201–217.
Browne, W. J., Subramanian, S. V., Jones, K., & Goldstein, H. (2005). Variance partitioning in multilevel logistic models that exhibit overdispersion. Journal of the Royal Statistical Society, 168(3), 599–613.
Brunell, T. L. (2008). Redistricting and representation: Why competitive elections are bad for America. Routledge.
Bullock, C. S., & Brady, D. W. (1983). Party, constituency, and roll-call voting in the U.S. senate. Legislative Studies Quarterly, 8(1), 29–43.
Campbell, D. E. (2006). Why we vote: How schools and communities shape our civic life. Princeton University Press.
Campbell, J. E., & Jurek, S. J. (2003). The decline of competition and change in congressional elections. In S. Ahuja & R. Dewhirst (Eds.), Congress responds to the twentieth century. Ohio State University Press.
Costa, D. L., & Kahn, M. E. (2003). Civic engagement and community heterogeneity: An economist's perspective. PS: Political Science and Politics, 1(1), 103–111.
Dennis, C., Bishin, B. G., & Nicolaou, P. (2000). Constituent diversity and congress: The case of NAFTA. Journal of Socio-Economics, 29(4), 349–360.
Elston, D. A., Moss, R., Boulinier, T., Arrowsmith, C., & Lambin, X. (2001). Analysis of aggregation, a worked example: numbers of ticks on red grouse chicks. Parasitology, 122, 563–569.
Ensley, M. J., Thomas, M. W., & de Marchi, S. (2009). District complexity as an advantage in congressional elections. American Journal of Political Science, 53(4), 990–1005.
Ferejohn, J. (1986). Incumbent performance and electoral control. Public Choice, 50(1–3), 5–25.
Fieldhouse, E., & Cutts, D. (2010). Does diversity damage social capital? A comparative study of neighbourhood diversity and social capital in the US and Britain. Canadian Journal of Political Science, 43(2), 289–318.
Fiorina, M. P. (1974). Representatives, roll calls, and constituencies. Lexington, MA: Lexington Books.
Fiorina, M. P. (1981). Retrospective voting in American national elections. New Haven, CT: Yale University Press.
Fitzgerald, J., & Curtis, K. A. (2012). Partisan discord in the family and political engagement: A comparative behavioral analysis. Journal of Politics, 74(1), 129–141.
Gastil, J., & Dillard, J. P. (1999). Increasing political sophistication through public deliberation. Political Communication, 16(1), 3–23.
Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge University Press.
Gerber, E. R., & Lewis, J. B. (2004). Beyond the median: Voter preferences, district heterogeneity, and political representation. Journal of Political Economy, 112(6), 1364–1383.
Gimpel, J. G., Lay, J. C., & Schuknecht, J. E. (2003). Cultivating democracy: Civic environments and political socialization in America. Brookings Institution.
Glaser, J. M. (2003). Social context and inter-group political attitudes: Experiments in group conflict theory. British Journal of Political Science, 33, 607–620.
Griffin, J. D. (2006). Electoral competition and democratic responsiveness: A defense of the marginality hypothesis. Journal of Politics, 68(4), 911–921.
Gronke, P. (2001). The electorate, the campaign, and the office: A unified approach to senate and house elections. University of Michigan Press.
Gulati, G. J. (2004). Revisiting the link between electoral competition and policy extremism in the U.S. congress. American Politics Research, 32(5), 495–520.
Habyarimana, J., Humphreys, M., Posner, D. N., & Weinstein, J. M. (2007). Why does ethnic diversity undermine public goods provision? American Political Science Review, 101(4), 709–725.
Huckfeldt, R., Mendez, J. M., & Osborn, T. (2004). Disagreement, ambivalence, and engagement: The political consequences of heterogeneous networks. Political Psychology, 25(1), 65–95.
Hutchings, V. L. (2003). Public opinion and democratic accountability: How citizens learn about politics. Princeton: Princeton University Press.
Jacobson, G. C. (2004). The politics of congressional elections. New York: Pearson Longman.
Jones, D. R. (2003). Position taking and position avoidance in the U.S. senate. Journal of Politics, 65(3), 851–863.
Jones, P. E. (2011). Which buck stops here? Accountability for policy positions and policy outcomes in congress. Journal of Politics, 73(3), 764–782.
Kahn, K. F., & Kenney, P. J. (1999). The spectacle of U.S. senate campaigns. Princeton University Press.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–292.
Kuklinski, J. H. (1977). District competitiveness and legislative roll-call behavior: A reassessment of the marginality hypothesis. American Journal of Political Science, 21(3), 627–638.
La Raja, R. (2009). Redistricting: Reading between the lines. Annual Review of Political Science, 12, 203–223.
Levendusky, M. S., & Pope, J. C. (2010). Measuring aggregate-level ideological heterogeneity. Legislative Studies Quarterly, 35(2), 259–282.
Lodge, M., & Hamill, R. (1986). A partisan schema for political information processing. American Political Science Review, 80, 505–519.
Luttmer, E. F. P. (2001). Group loyalty and the taste for redistribution. Journal of Political Economy, 109(3), 6–34.
McAtee, A., & Wolak, J. (2011). Why people decide to participate in state politics. Political Research Quarterly, 64(1), 45–58.
Mutz, D. C. (2002). Cross-cutting social networks: Testing democratic theory in practice. American Political Science Review, 96(1), 111–126.
Oliver, J. E. (2001). Democracy in suburbia. Princeton University Press.
Oliver, J. E., & Ha, S. E. (2007). Vote choice in suburban elections. American Political Science Review, 101(3), 393–408.
Pacheco, J. S. (2008). Political socialization in context: The effect of political competition on youth voter turnout. Political Behavior, 30, 415–436.
Pildes, R. H. (2006). The constitution and political competition. New York University Public Law and Legal Theory Working Papers. Paper 27.
Putnam, R. D. (2007). E Pluribus Unum: Diversity and community in the twenty-first century. Scandinavian Political Studies, 30(2), 137–174.
Rahn, W. M. (1993). The role of partisan stereotypes in information processing about political candidates. American Journal of Political Science, 37(2), 472–496.
Samples, J., & McDonald, M. P. (Eds.) (2006). The marketplace of democracy: Electoral competition and American politics. Brookings Institution.
Scheufele, D. A., Hardy, B. W., Brossard, D., Waismel-Manor, I. S., & Nisbet, E. (2006). Democracy based on difference: Examining the links between structural heterogeneity, heterogeneity of discussion networks, and democratic citizenship. Journal of Communication, 56, 728–753.
Shapiro, C. R., Brady, D. W., Brody, R. A., & Ferejohn, J. A. (1990). Linking constituency opinion and senate voting scores: A hybrid explanation. Legislative Studies Quarterly, 15(4), 599–621.
Sinclair, B. (1990). Washington behavior and home-state reputation: The impact of national prominence on senators' images. Legislative Studies Quarterly, 15(4), 475–494.
Sullivan, J. L. (1973). Political correlates of social, economic and religious diversity in the American states. Journal of Politics, 35, 70–84.
Vavreck, L., & Rivers, D. (2008). The 2006 cooperative congressional election study. Journal of Elections, Public Opinion, and Parties, 18(4), 355–366.
Verba, S., Schlozman, K. L., & Brady, H. E. (1995). Voice and equality: Civic voluntarism in American politics. Cambridge: Harvard University Press.
Weissberg, R. (1979). Assessing legislator-constituency policy agreement. Legislative Studies Quarterly, 4, 605–622.
1. Department of Political Science and International Relations, University of Delaware, Newark, USA
Jones, P.E. Polit Behav (2013) 35: 481. https://doi.org/10.1007/s11109-012-9203-3
Simple-homotopy equivalence
In mathematics, particularly the area of topology, a simple-homotopy equivalence is a refinement of the concept of homotopy equivalence. Two CW-complexes are simple-homotopy equivalent if they are related by a sequence of collapses and expansions (inverses of collapses), and a homotopy equivalence is a simple homotopy equivalence if it is homotopic to such a map.
The obstruction to a homotopy equivalence being a simple homotopy equivalence is the Whitehead torsion, $\tau (f).$
A homotopy theory that studies simple-homotopy types is called simple homotopy theory.
See also
• Discrete Morse theory
References
• Cohen, Marshall M. (1973), A course in simple-homotopy theory, Berlin, New York: Springer-Verlag, ISBN 978-3-540-90055-9, MR 0362320
\begin{document}
\title{A Fast Algorithm for Robust Regression with Penalised Trimmed Squares}
\author{L. Pitsoulis and G. Zioutas \\
Department of Mathematical and Physical Sciences \\
Aristotle University of Thessaloniki \\
541 24 Thessaloniki, Greece }
\maketitle
\begin{abstract} \noindent The presence of groups containing high leverage outliers makes linear regression a difficult problem due to the masking effect. The available high breakdown estimators based on Least Trimmed Squares often do not succeed in detecting masked high leverage outliers in finite samples.
An alternative to the LTS estimator, called the Penalised Trimmed Squares (PTS) estimator, was introduced by the authors in~\cite{ZiouAv:05,ZiAvPi:07} and appears to be less sensitive to the masking problem. This estimator is defined by a Quadratic Mixed Integer Programming (QMIP) problem, where the objective function includes a penalty cost for each observation, which serves as an upper bound on the residual error for any feasible regression line. Since the PTS does not require presetting the number of outliers to delete from the data set, it has better efficiency with respect to other estimators. However, due to the high computational complexity of the resulting QMIP problem, exact solutions for moderately large regression problems are infeasible.
In this paper we further establish the theoretical properties of the PTS estimator, such as high breakdown and efficiency, and propose an approximate algorithm called Fast-PTS to compute the PTS estimator for large data sets efficiently. Extensive computational experiments on sets of benchmark instances with varying degrees of outlier contamination indicate that the proposed algorithm performs well in identifying groups of high leverage outliers in reasonable computational time.
\vspace*{.25in} \noindent {\bf Keywords}: Robust regression; quadratic mixed integer programming; least trimmed squares; outliers detection. \end{abstract}
\setcounter{tocdepth}{4} \tableofcontents
\section{Introduction} \label{Sec1}
In multilinear regression models experimental data often contains outliers and bad influential observations, due to errors. It is important to identify these observations and eliminate them from the data set, since they can lead the regression estimate to take erroneous values. If the data is contaminated with a single or few outliers the problem of identifying such observations is not difficult, but in most cases data sets contain groups of outliers which makes the problem more difficult due to masking and swamping effects. An indirect approach to outlier identification is through a robust regression estimate. If a robust estimate is relatively unaffected from outliers, then the residuals from the robust fit should be used to identify the outliers.
Two commonly used criteria for robustness are the {\em breakdown point} and {\em efficiency} of the estimator. The breakdown point can be roughly defined as the minimum percentage of outliers present in the data, that could lead the error between the robust estimate and the hypothetical true estimate to be infinitely large. It is desirable for a robust estimator to have a breakdown point of close to 50\% of the sample size.
A well known estimator with high breakdown point is the Least Trimmed Squares (LTS) estimator of Rousseeuw and Leroy~\cite{RouLer:87}. The method consists of finding the $n-k$ observations whose deletion from the data set leads to the smallest residual sum of squares over the remaining $k$ observations. However, it is well known that the LTS loses efficiency. Several robust estimators have been proposed in the literature as extensions to the LTS, to obtain high breakdown point and simultaneously improve the efficiency. Among them are the S-estimator proposed by Rousseeuw and Yohai~\cite{RouYoh:84}, the MM-estimator proposed by Yohai~\cite{Yohai:87}, the $\tau$-estimator proposed by Yohai and Zamar~\cite{YohZam:88}, the S1S estimator proposed by Coakley and Hettmansperger~\cite{CoaHet:93}, the WLSE computed from an initial high breakdown robust estimator by Agostinelli and Markatou~\cite{AgoMar:98} and the REWLS by Gervini and Yohai~\cite{GerYoh:02}. Most of these estimators start from an initial robust scale $\sigma_{LTS}$ and robust regression coefficient estimate $\boldsymbol{\beta}_{LTS}$ obtained from the LTS estimator with coverage $k\simeq[(n+p+1)/2]\simeq 50\%$. Then they compute the standardised residuals using different schemes based on the robust scale $\sigma_{LTS}$ and design weights with appropriate cut-off values, and they reconsider which of the potential outliers should remain in the basic subset as clean data and which should be eliminated as outliers. Thus, they simultaneously attain the maximum breakdown point of the LTS estimator while improving its efficiency. The key to the success of all the aforementioned methods is to start with a good initial regression coefficient estimate $\boldsymbol{\beta}_{LTS}$. When a finite sample contains multiple high leverage points, these may be masked outliers and would affect the LTS estimator, resulting in a biased initial estimate. Moreover, the LTS method requires a priori knowledge of the {\em coverage} $k$, or equivalently the number $n-k$ of the most likely outliers that produces the largest reduction in the residual sum of squares when deleted. Unfortunately, $k$ is typically unknown (Gentleman and Wilk~\cite{GenWilk:75}). Computation of the LTS estimator requires the solution of a hard combinatorial problem, and there have been many exact and approximation algorithms proposed in the literature (see Section~\ref{subsec_lts}).
A different approach to obtaining a robust estimate, called Penalised Trimmed Squares (PTS), has been proposed in~\cite{ZiAvPi:07}; it does not require presetting the number $n-k$ of outliers to delete from the data set. The PTS approach ``trims'' outliers from the data but instead of discarding a fixed number of observations, a fixed threshold for the allowable size of the adjusted residuals is used. The new estimator PTS is defined by minimising a {\em loss function}, which is the sum of squared residuals and penalty costs for deleting bad observations. These penalties regulate the threshold of the allowable adjusted residuals, as well as the coverage. In order to overcome the problem of groups of masking outliers containing almost identical high leverage points, lower penalties are proposed, yielding a smaller adjusted residual threshold for such observations. These penalties are a function of robust leverages resulting from the MCD estimator by Rousseeuw and Van Driessen~\cite{RouDrie:99}. Computationally, the PTS estimator also involves the solution of a hard combinatorial problem.
The purpose of this paper is twofold. First, the robust properties of the PTS estimator as presented in~\cite{ZiAvPi:07} such as equivariance, exact fit property, high breakdown point and efficiency, are established. Secondly we present an efficient approximation algorithm called Fast-PTS, to compute the PTS estimator for large data sets without having to solve the problem exactly.
The organisation of the paper is the following. Section~\ref{Sec2} contains some preliminary notations and definitions. The definition of the PTS estimator, its robust properties, the computation of the penalties and finally an equivalent mixed integer quadratic programming formulation are presented in Section~\ref{sec_pts}. Section~\ref{sec_fast-pts} describes the Fast-PTS algorithm, where initially a set of necessary optimality conditions is given, and the algorithm is then presented in pseudocode. Extensive computational experiments to examine the performance of the Fast-PTS algorithm with respect to, both the quality of the approximate solution when compared to the exact solution of small instances, and the robust performance to a set of benchmark and artificially generated large instances, are given in Section~\ref{sec_comp}. Finally, concluding remarks and further possible extensions to both the algorithm and the robust estimator are given in the last section.
\section{Trimmed Squares Regression}\label{Sec2}
\subsection{Preliminaries} In this section we will state some well established facts in order to set up the notation that will be used for the rest of the paper. We consider the multi-linear regression model with $p$ independent variables \begin{equation} \label{eq_1} \mathbf{y} = \mathbf{X}{\boldsymbol \beta} + \mathbf{u}, \end{equation} where $\mathbf{y} = (y_{1},y_{2}, \ldots , y_{n})^{T}$ is the $n\times 1$ vector of the response variable, $\mathbf{X}$ is a full rank $n\times p$ matrix with rows $\mathbf{x}_{i} =(x_{i1},x_{i2},\ldots ,x_{ip})$ of explanatory variables, $\boldsymbol{\beta}$ is a $p\times 1$ vector $\boldsymbol{\beta} =(\beta_{1}, \beta_{2}, \ldots , \beta_{p})^{T}$ of unknown parameters, and $\mathbf{u}$ is an $n\times 1$ vector $\mathbf{u} = (u_{1}, u_{2}, \ldots , u_{n})^{T}$ of iid random errors with expectation zero and variance $\sigma^2$. We observe a sample $(x_{i1}, x_{i2}, \ldots ,x_{ip}, y_{i})$, for $i=1, 2, \ldots , n$, and construct an estimator for the unknown parameter $\boldsymbol{\beta}$. The Ordinary Least Squares Estimator (OLS) is defined by minimising the squared residual loss function \begin{equation}\label{eq_2} OLS(\mathbf{X,y}) := \arg\min_{\boldsymbol{\beta}} \sum_{i=1}^{n}r(\boldsymbol{\beta})^{2}_{i} \end{equation} where $r(\boldsymbol{\beta})_{i}$ is the regression residual $r(\boldsymbol{\beta})_{i}:=y_{i}-\mathbf{x}_{i}{\boldsymbol{\beta}}$. We will write $r_{i}$ instead of $r(\boldsymbol{\beta})_{i}$ whenever the parameter vector $\boldsymbol{\beta}$ need not be explicitly stated. It is well known that a solution to (\ref{eq_2}) is obtained in polynomial time (i.e. ${\cal O}(n^3)$) by the normal equations \begin{equation*} OLS(\mathbf{X,y}) = (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y} \end{equation*}
A transformation of the residuals that will be useful in this work is the adjusted residual $\alpha_{i}$ which is defined as \begin{equation}\label{eq_adj_residual} \alpha_{i}:=\frac{r_{i}}{\sqrt{1-h_{i}}}, \end{equation} where $h_{i} \;\;(0<h_{i}<1)$ measures the leverage of the $i^{th}$ observation and is defined as $h_{i} := {\mathbf{x}_{i}^{T}}(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{x}_{i}$. The reduction in the sum of squared residuals in (\ref{eq_2}) due to the deletion of an observation $(\mathbf{x}_{i},y_{i})$ is the square of its adjusted residual \[ \alpha_{i}^2=\frac{r_{i}^{2}}{1-h_{i}}. \] Unfortunately, points that are far from the predicted line (outliers) are overemphasised. There are several types of outliers that can occur in a data set $(\mathbf{X,y})$. Following the terminology of Rousseeuw and Van Zomeren~\cite{RouZom:90}, a point $(\mathbf{x}_{i}, y_{i})$ which does not follow the linear pattern of the majority of the data but whose $\mathbf{x}_{i}$ is not outlying is called a {\em vertical outlier}. A point $(\mathbf{x}_{i}, y_{i})$ whose $\mathbf{x}_{i}$ is outlying (large $h_{i}$) is called a {\em leverage point}. It is called {\em good} leverage point when $(\mathbf{x}_{i}, y_{i})$ follows the pattern of the majority otherwise it is called {\em bad} leverage point. The OLS estimator is very sensitive to outliers, in the sense that the presence of any type of previously mentioned types of outliers greatly affects the solution of (\ref{eq_2}). We wish to construct a robust estimator for the parameter $\boldsymbol{\beta}$, such that the influence of any observation $(\mathbf{x}_{i}, y_{i})$ on the sample estimator is bounded.
\subsection{Least Trimmed Squares}\label{subsec_lts}
Rousseeuw introduced the Least Trimmed Squares (LTS) estimator in~\cite{Rousseeuw:84}, which fits the best subset of $k$ observations, removing the remaining $n-k$ observations. The LTS estimator is defined as: \begin{eqnarray}\label{eq_LTS} LTS(\mathbf{X,y},k):= \arg\min_{\boldsymbol{\beta}} & & \sum_{i=1}^{k} r(\boldsymbol{\beta})_{(i)}^2 \\ \nonumber
\mbox{s.t.} & & r(\boldsymbol{\beta})_{(1)}^{2} \le r(\boldsymbol{\beta})_{(2)}^{2} \le \cdots \le r(\boldsymbol{\beta})_{(n)}^{2} \end{eqnarray} where $k$ is called the {\em coverage}, and is chosen as $k\geq [(n+p+1)/2]$ a priori, so as to maximise the breakdown point. In (\ref{eq_LTS}) and in what follows, we employ the convention of writing $r(\boldsymbol{\beta})_{(i)}$ for the $i^{\mbox{th}}$ smallest residual error with respect to $\boldsymbol{\beta}$. The LTS estimator has high breakdown point but loses efficiency, since $n-k$ observations have to be removed from the sample even if they are not outliers. Assuming that the set of points $(\mathbf{x}_{i},y_{i})$ are in {\em general position}, that is any two distinct subsets of $p$ points have different regression lines, and observing that for a given $k$ the problem in (\ref{eq_LTS}) consists of solving ${n \choose k}$ ordinary least squares problems, it is immediate that it has a unique solution.
Problem (\ref{eq_LTS}) has been formulated as a nonlinear programming problem with linear constraints and non-convex objective function by Giloni and Padberg in~\cite{GilPad_a:02}, whereby its local minima are well characterised by the Karush-Kuhn-Tucker optimality conditions. In particular, the authors in~\cite{GilPad_a:02} establish its equivalence with a concave minimisation problem, which is known to be NP-Hard, and provide a procedure for computing local minima. The exact computation of (\ref{eq_LTS}) requires an exponential number of steps with respect to the size of the problem, therefore it can only be applied to small instances. Agull\'{o} in~\cite{Agullo:01} presented a branch and bound procedure for the exact computation of (\ref{eq_LTS}) along with several procedures to reduce the computational effort, and managed to find the global optimum for several benchmark instances of size $n<50$. Polynomial time algorithms for the exact computation of the LTS for simple ($p=1$) linear regression problems are given by H\"ossjer~\cite{Hossjer:95}, and recently by Li~\cite{Li:05} who utilises advanced data structures to attain improved computational complexity. Due to the high computational complexity of the LTS estimator, a plethora of approximation algorithms has appeared in the literature with varying success. We mention, among others, the Forward Search algorithm by Atkinson~\cite{Atkinson:94,AtkChen:94}, the Feasible Solution algorithm by Hawkins~\cite{Hawkins:94,HawOli:99} and the more recent Fast-LTS algorithm by Rousseeuw and Van Driessen~\cite{RouDrie:06}, which is currently the most accurate and efficient algorithm.
\section{Penalised Trimmed Squares}\label{sec_pts}
The basic idea of the PTS estimator is to insert fixed penalty costs for each observation into the objective function of (\ref{eq_LTS}), such that only observations whose adjusted residuals are larger than their penalty costs are deleted from the data set. Therefore instead of defining a priori a coverage $k$, the penalty costs are defined and the coverage becomes a decision variable. In the sections that follow suitable penalties for multiple high-leverage outliers are proposed. We consider as most likely outliers that subset of the observations that produces significant reduction in the sum of squared residuals when deleted. The proposed PTS estimator can be defined through a minimisation problem, with an objective function split into two parts: the sum of $k$ squared residuals in the clean data and the sum of the penalties for deleting the remaining $n-k$ observations. \begin{eqnarray} PTS(\mathbf{X,y},\mathbf{p}):= \arg\min_{\boldsymbol{\beta},k}& & \sum_{i=1}^{k} r(\boldsymbol{\beta})_{(i)}^{2} + \sum_{i=k+1}^{n} p_{(i)} \label{eq_PTS} \\\nonumber \mbox{s.t.} & & r(\boldsymbol{\beta})_{(1)}^{2} \le r(\boldsymbol{\beta})_{(2)}^{2} \le \cdots \le r(\boldsymbol{\beta})_{(n)}^{2} \end{eqnarray} where $\mathbf{p} = (p_{1},\ldots,p_{n})$ and $p_{i}$ is the {\em penalty} cost for deleting the $i^{\mbox{th}}$ observation, defined as \[ p_{i} := \max \{ \epsilon, (c\hat{\sigma})^{2} \} \] for some small $\epsilon > 0$, where $\hat{\sigma}$ is a {\em robust residual scale} and $c$ is the {\em cut-off parameter}. Although these penalties are constant in the basic definition of the PTS, as we shall see later in Section~\ref{subsec_penalties}, each observation will have an individual penalty. The estimator performance is very sensitive to the penalties defined a priori, which regulate the robustness and the efficiency of the estimator. The choice of the robust scale $\hat{\sigma}$ plays an important role in the coverage of the PTS estimator. The minimisation problem (\ref{eq_PTS}) is not convex since it is equivalent to a quadratic mixed integer programming problem; nevertheless, there exists a unique solution under mild assumptions on the data (see section \ref{Sec3}).
Let us define with $f_{PTS}(\boldsymbol{\beta},k)$ the objective function of (\ref{eq_PTS}). We have the following straightforward observations regarding the optimisation problem in (\ref{eq_PTS}), where the set of feasible solutions is all $(\boldsymbol{\beta},k)$ for $\boldsymbol{\beta}\in \mathbb{R}^{p}$ and $k=0,\ldots,n$: \begin{itemize} \item[i)] For any feasible solution $(\boldsymbol{\beta},k)$ there exists an associated set of $k$ observations as determined by the
ordering \[ r(\boldsymbol{\beta})_{(1)}^{2} \le r(\boldsymbol{\beta})_{(2)}^{2} \le \cdots \le r(\boldsymbol{\beta})_{(k)}^{2} \le \cdots \le r(\boldsymbol{\beta})_{(n)}^{2}. \] \item[ii)] For any $\boldsymbol{\beta}\in \mathbb{R}^{p}$ there exists $k^{*}$ such that \[ f_{PTS}(\boldsymbol{\beta},k^{*}) \leq f_{PTS}(\boldsymbol{\beta},k), \;\;\forall k=0,\ldots,n, \] where $k^{*}$ is determined by those observations with squared residuals w.r.t. $\boldsymbol{\beta}$ less than their respective penalties. Therefore at optimality, $\boldsymbol{\beta}\in \mathbb{R}^{p}$ determines $k$ uniquely. \item[iii)] Since the second term of the objective function in (\ref{eq_PTS}) is independent of $\boldsymbol{\beta}$, for any feasible solution $(\boldsymbol{\beta},k)$ \[ f_{PTS}(\boldsymbol{\beta}_{OLS},k) \leq f_{PTS}(\boldsymbol{\beta},k), \] where $\boldsymbol{\beta}_{OLS}$ is the ordinary least squares estimate obtained for the set of observations so defined by $(\boldsymbol{\beta},k)$. \end{itemize} We summarise the above in the following lemma which will be used later. \begin{lemma}\label{lemma_pts_ols} If $(\boldsymbol{\beta}_{PTS}, k_{PTS})$ is the solution obtained by (\ref{eq_PTS}) for sample data $(\mathbf{X,y})$ then \[ OLS(\mathbf{X}_{k_{PTS}},\mathbf{y}_{k_{PTS}}) = \boldsymbol{\beta}_{PTS}, \] where $(\mathbf{X}_{k_{PTS}},\mathbf{y}_{k_{PTS}} )$ is the submatrix and subvector of $(\mathbf{X,y})$ respectively, as determined by the set of $k_{PTS}$ observations defined by $(\boldsymbol{\beta}_{PTS}, k_{PTS})$. \end{lemma}
In view of the above lemma and (\ref{eq_adj_residual}) we can now state the {\bf general principle} of the PTS estimator, that is to delete an observation $i$ if its squared adjusted residual is larger than its penalty cost \begin{equation}\label{eq_20} \alpha_{i}^2 > (c\hat{\sigma})^{2}. \end{equation} Therefore, the adjusted residual has as a threshold the square root of the deleting penalty $c\hat{\sigma}$. Given that the PTS estimate is the OLS of the clean data set, the resulting adjusted residuals must be smaller than or equal to the penalty thresholds \begin{equation}\label{eq_21}
\frac{|r_{i}|}{\sqrt{1-h_{i}^{*}}} < c\hat{\sigma}, \;\; 1\leq i\le k. \end{equation} where $h_{i}^{*}$ is the leverage of the $i^{th}$ observation in the $k$ observations. Otherwise, the $i^{th}$ observation cannot remain in the coverage sample due to the PTS principle.
The PTS as defined in (\ref{eq_PTS}) ``trims'' outliers from the data but instead of discarding a fixed number of observations, a fixed threshold $(c\hat{\sigma})$ for the allowable size of the adjusted residuals is used.
\subsection{Properties of the PTS} Most of the robust properties of the PTS estimator are inherited by the use of the robust scale $\hat{\sigma}$, which in our case will be obtained from the LTS estimator with coverage $(n+p+1)/2$. In what follows we will employ the notation used in~\cite{RouLer:87}.
Let $T(\mathbf{X},\mathbf{y})$ be an estimator for some sample data $(\mathbf{X,y})$.
An estimator $T$ is called {\em regression equivariant} if \[ T(\mathbf{X}, \mathbf{y}+\mathbf{Xv}) = T(\mathbf{X,y}) + \mathbf{v}, \;\;\;\mbox{for any}\;\;\mathbf{v}\in\mathbb{R}^{n}, \] {\em scale equivariant} if \[ T(\mathbf{X},c\mathbf{y}) = c T(\mathbf{X,y}), \;\;\;\mbox{for any}\;\;c\in\mathbb{R}, \] and {\em affine equivariant} if \[ T(\mathbf{XA,y}) = \mathbf{A}^{-1} T(\mathbf{X,y}), \;\;\;\mbox{for any nonsingular}\;\;\mathbf{A}\in\mathbb{R}^{p\times p}. \]
\begin{lemma}\label{lem_equivariance} The PTS estimator is regression, scale and affine equivariant. \end{lemma} \begin{proof} For regression equivariance, by Lemma~\ref{lemma_pts_ols} there will exist some subset of $k$ observations were \begin{eqnarray*} PTS(\mathbf{X}, \mathbf{y}+\mathbf{Xv}, \mathbf{p}) & = & (\mathbf{X}_{k}^{T}\mathbf{X}_{k})^{-1}\mathbf{X}_{k}^{T}(\mathbf{y}_{k} + \mathbf{X}_{k}\mathbf{v}) \\
& = & (\mathbf{X}_{k}^{T}\mathbf{X}_{k})^{-1}\mathbf{X}_{k}^{T}\mathbf{y}_{k} + \mathbf{v} \\
& = & PTS(\mathbf{X}, \mathbf{y}, \mathbf{p}) + \mathbf{v}. \end{eqnarray*} Similarly for scale and affine equivariance. \end{proof}
The PTS estimator determines that subset of observations whose deletion produces the largest reduction in the sum of squared residuals. The LTS estimator determines the subset of size $k$ with the minimum sum of squared residuals. The relationship between the two estimators can be seen by the following propositions.
\begin{proposition}\label{Pro1} If the PTS estimator for sample data $(\mathbf{X,y})$ converges to the solution $(\boldsymbol{\beta}_{PTS},k_{PTS})$, then $LTS(\mathbf{X,y},k_{PTS}) = \boldsymbol{\beta}_{PTS}$. \end{proposition} \begin{proof} Observe that any $k$ observations will have the same sum of penalties, therefore we will have \begin{eqnarray*} LTS(\mathbf{X,y},k_{PTS})& = & \arg\min_{\boldsymbol{\beta}} \sum_{i=1}^{k_{PTS}} r(\boldsymbol{\beta})_{(i)}^2 \\
& = & \arg\min_{\boldsymbol{\beta}} \left( \sum_{i=1}^{k_{PTS}} r(\boldsymbol{\beta})_{(i)}^2 + (n-k_{PTS}) (c\hat{\sigma})^{2} \right) \\
& = & \boldsymbol{\beta}_{PTS} \end{eqnarray*} \end{proof}
Conversely, under mild assumptions, from the LTS estimator given in (\ref{eq_LTS}) and using the largest residual $r_{k}$ as the deleting penalty, the PTS estimator given in (\ref{eq_PTS}) leads to the same results. \begin{proposition}\label{Pro2} If for sample data $(\mathbf{X,y})$ and coverage $k$ we have $LTS(\mathbf{X,y},k)=\boldsymbol{\beta}_{LTS}$ while the largest residual $r(\boldsymbol{\beta}_{LTS})_{k}$ is an increasing function with respect to $k$, then for fixed penalty $p_{i}=(r(\boldsymbol{\beta}_{LTS})_{k})^2,\;\;\; i=1,\ldots,n,$ we have $PTS(\mathbf{X,y,p})=\boldsymbol{\beta}_{LTS}$. \end{proposition} \begin{proof} Let $PTS(\mathbf{X,y,p})=(\boldsymbol{\beta}_{PTS},k_{PTS})$. By Proposition~\ref{Pro1} it is enough to show that $k_{PTS}=k$. Moreover, the largest residual $r(\boldsymbol{\beta}_{PTS})_{k_{PTS}}$ will also be an increasing function with respect to $k_{PTS}$. We will have for any $k \geq k_{PTS}$ \begin{eqnarray*} \sum_{i=1}^{k_{PTS}} r_{i}^{2} + \sum_{i=k_{PTS}}^{n} p_{i} & = & \sum_{i=1}^{k}r_{i}^{2} - \sum_{i=k_{PTS}}^{k} r_{i}^{2} + \sum_{i=k}^{n}p_{i}+ \sum_{i=k_{PTS}}^{k} p_{i} \\ & = & \sum_{i=1}^{k}r_{i}^{2} + \sum_{i=k}^{n}p_{i} + \sum_{i=k_{PTS}}^{k} (p_{i} - r_{i}^{2} ) \\ \end{eqnarray*} where the terms in the last summation are all nonnegative since $p_{i} \geq (r(\boldsymbol{\beta}_{LTS})_{i})^2,$ $i=k_{PTS},\ldots,k$. Therefore the minimum is obtained for $k=k_{PTS}$. Similarly for $k\leq k_{PTS}$. \end{proof}
In the LTS estimator the coverage $k$ has to be chosen between $n/2$ and $n$. For coverage $k=[(n+p+1)/2]$ the maximum breakdown point is obtained at the cost of losing efficiency, whereas for $k=n$ the obtained breakdown value is reduced to $1/n$. As is the case with most robust estimators, there is a trade-off between robustness and efficiency. The advantage of the PTS procedure is that there is no need to define a priori the coverage $k$. For given penalty $(c\hat{\sigma})^2$, where $\hat{\sigma}$ is a high breakdown robust scale, the PTS can resist all influential observations with large adjusted residuals (i.e. $\alpha_{i} > c \hat{\sigma}$).
\subsubsection{Exact Fit Property and Breakdown Point}\label{sssec_exact} In this section we will investigate two common robust properties for the PTS estimator, namely the exact fit and breakdown points. The {\em exact fit point} of an estimator $T$ is defined as the minimum fraction of observations on a hyperplane, necessary to guarantee that the estimator coincides with that hyperplane~\cite{YohZam:88}. Formally \begin{eqnarray}\label{eq_exact_fit_point} \delta^{*}(T,\mathbf{X,y}) & := & \min \left\{ \frac{m}{n} : \exists \boldsymbol{\beta} \;\;\mbox{such that}\;\; \mathbf{X}_{m}\boldsymbol{\beta} = \mathbf{y}_{m}, T(\mathbf{X,y}) = \boldsymbol{\beta} \right\}, \end{eqnarray} where $(\mathbf{X}_{m},\mathbf{y}_{m})$ is the submatrix and subvector of $(\mathbf{X},\mathbf{y})$ respectively, as determined by the set of $m$ observations. A dual notion is used in~\cite{RouLer:87}, where the exact fit point is taken as the minimum fraction of observations {\em not} on a hyperplane that will force the estimator to give a different estimate from that hyperplane. An estimator is said to have the {\em exact fit property} when $\delta^{*} \leq 0.5$.
The {\em breakdown point} of an estimator $T$ is defined as the minimum fraction of observations that could take arbitrary values and force the estimator to vary indefinitely from its original estimate. This is the finite version of breakdown point as it was introduced in~\cite{DonHub:83}, and could be stated formally as \begin{eqnarray}\label{eq_breakdown_point}
\epsilon^{*}(T,\mathbf{X,y}) & := & \min \left\{ \frac{m}{n} : \sup_{\mathbf{X}_{m}, \mathbf{y}_{m}} \| T(\mathbf{X,y}) - T(\mathbf{X}_{m},\mathbf{y}_{m}) \| = \infty \right\}, \end{eqnarray} where $(\mathbf{X}_{m},\mathbf{y}_{m})$ is the submatrix and subvector of $(\mathbf{X},\mathbf{y})$ respectively, as determined by the set of $m$ observations. The relationship between exact fit point and breakdown point for regression equivariant estimators is \begin{equation}\label{eq_exact_breakdown} \delta^{*} \geq 1 - \epsilon^{*} \end{equation} where under mild assumptions equality also holds.
\begin{proposition}\label{pro_exact_fit} If for sample data $(\mathbf{X,y})$ there exists some $\boldsymbol{\beta}$ such that strictly more than $\frac{n+p-1}{2}$ of the observations satisfy $\mathbf{x}_{i}\boldsymbol{\beta} = y_{i}$ and are in general position, then the PTS solution with robust scale $\sigma_{LTS}$ equals $\boldsymbol{\beta}$. \end{proposition} \begin{proof} Since the above is true for the LTS estimator (see the corollary on page 134 of~\cite{RouLer:87}), we will have $\sigma_{LTS}=0$. Therefore the penalties as defined in (\ref{eq_PTS}) will be $\mathbf{p}=(\epsilon,\ldots,\epsilon)$ and at optimality it will hold \[ PTS(\mathbf{X,y,p}) = LTS(\mathbf{X,y},k_{LTS})= \boldsymbol{\beta}, \] for $k_{LTS}= \frac{n+p+1}{2}$. \end{proof} It is known~\cite{RouLer:87} that for regression and scale equivariant estimators it holds \[ \epsilon^{*} \leq \frac{n-p+2}{2n}. \] Below we show that the PTS estimator has a high breakdown point. \begin{proposition}\label{pro_breakdown} The breakdown point of the PTS estimator with robust scale $\sigma_{LTS}$ is bounded from below as \[ \epsilon_{PTS}^{*} \geq \frac{n-p-1}{2n}. \] \end{proposition} \begin{proof} By Lemma~\ref{lem_equivariance} and (\ref{eq_exact_breakdown}) we know that \[ \delta^{*}_{PTS} \geq 1- \epsilon^{*}_{PTS}, \] while by Proposition~\ref{pro_exact_fit} we get \[ \delta^{*}_{PTS} \leq \frac{n+p+1}{2n}. \] Therefore by the above \[ \epsilon^{*}_{PTS} \geq 1 - \frac{n+p+1}{2n} = \frac{n-p-1}{2n}. \] \end{proof}
\subsubsection{Efficiency}
Let $(\mathbf{X,y})$ be sample data where the random errors $\mathbf{u}$ as given in the model (\ref{eq_1}) follow the normal distribution, $\mathbf{u} \sim N(0,\sigma^{2})$.
Consider now that $n\rightarrow \infty$. We know from~\cite{RouLer:87} that the scale $\sigma_{LTS}$ obtained from the LTS estimator is consistent (i.e. $\lim_{n\rightarrow \infty} \sigma_{LTS}=\sigma$). Moreover, for each observation $i$ the leverage $h_{i}\rightarrow 0$, which means that by the PTS general principle an observation $i$ will be deleted if \[ \alpha_{i}^{2} = r(\boldsymbol{\beta}_{OLS})_{i}^{2} > (c \hat{\sigma})^{2}. \] Therefore, under Gaussian conditions on the random errors and for large $n$, we can say that the PTS estimator with robust scale $\hat{\sigma} = \sigma_{LTS}$ and cut-off parameter $c\in [2,3]$ deletes a very small fraction of observations as outliers.
\subsection{Penalties Computation}\label{subsec_penalties}
\subsubsection{Robust scale estimate $\hat{\sigma}$}
The key to the success of the PTS estimator is to use a robust scale $\hat{\sigma}$ for evaluating the proper penalties. This robust scale $\hat{\sigma}$ is obtained via the LTS estimator with coverage $k_{LTS}=[(n+p+1)/2]$ as follows. Initially a preliminary error scale $\hat{s}$ is computed \[ \hat{s} = c_{k_{LTS},n} \sqrt{ \frac{1}{k_{LTS}} \sum_{i=1}^{k_{LTS}} r(\boldsymbol{\beta}_{LTS})^{2}_{(i)} }, \] where the constant $c_{k_{LTS},n}$ is chosen so as to make the scale estimation consistent at the Gaussian model, that is \[ c_{k_{LTS}, n} = 1 / \sqrt{1 - \frac{2n}{k_{LTS} \alpha_{k_{LTS},n} } \Phi\left(\frac{1}{\alpha_{k_{LTS},n}}\right) }, \] where \[ \alpha_{k_{LTS},n} = \frac{1}{\Phi^{-1}(\frac{k_{LTS}+n}{2n})}. \] Then we improve the efficiency results of the LTS by applying a reweighting procedure (see~\cite{RouLer:87}). First the standardised residuals $r(\boldsymbol{\beta}_{LTS})_{i} / \hat{s}$ are computed, and they are used to determine a weight for each observation as follows: \[ w_{i} = \left\{ \begin{array}{rl}
1 & \textrm{if $\left| \frac{r(\boldsymbol{\beta}_{LTS})_{i}}{\hat{s}} \right| \leq 2.5$,} \\ 0 & \textrm{otherwise.} \end{array} \right. \] The final robust scale is then given by \begin{equation}\label{eq_error_scale} \hat{\sigma} = \sqrt{\frac{ \sum_{i=1}^{n} w_{i} r(\boldsymbol{\beta}_{LTS})_{i}^{2} }{\sum_{i=1}^{n} w_{i} - p} }. \end{equation}
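For concreteness, this scale computation can be sketched in a few lines of Python. The sketch below is only an illustration under hypothetical names (the actual implementation reported later in the paper is in Fortran); it assumes the residuals with respect to the LTS fit are already available, and takes $\phi$ in the consistency factor to be the standard normal density.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def reweighted_lts_scale(r_lts, n, p, k_lts):
    # r_lts : residuals of all n observations w.r.t. the LTS fit beta_LTS
    # k_lts : LTS coverage, here [(n + p + 1) / 2]
    # preliminary scale from the k_lts smallest squared residuals
    raw = np.sqrt(np.sort(r_lts ** 2)[:k_lts].mean())
    # consistency factor at the Gaussian model; q = 1 / alpha_{k,n}
    q = norm.ppf((k_lts + n) / (2.0 * n))
    c_kn = 1.0 / np.sqrt(1.0 - (2.0 * n / k_lts) * q * norm.pdf(q))
    s_hat = c_kn * raw
    # reweighting step: keep observations with |standardized residual| <= 2.5
    w = (np.abs(r_lts / s_hat) <= 2.5).astype(float)
    return np.sqrt(np.sum(w * r_lts ** 2) / (w.sum() - p))
\end{verbatim}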
\subsubsection{Unmasking Multiple High Leverage Outliers}
Based on the PTS principle, an observation is deleted if its adjusted residual exceeds the square root of the penalty cost, that is, if $\alpha_{i} \ge c\hat{\sigma}$. For $y$-outliers and even for some $\mathbf{x}$-outliers such an approach performs well. Unfortunately the masking problem arises when there is a group of high leverage points in the same direction. As is known, the leverage value $h_{i}$ can be distorted by the presence of a collection of points which individually have small leverages but collectively form a high leverage group. Pe\~na and Yohai~\cite{PenYoh:99} point out that the individual leverage $h_{i}$ of each point might be small, $h_{i}\ll 1$, whereas the final residual may appear to be very close to $0$; this is a masking problem. Thus in a set of identical high leverage outliers, the adjusted residual is masked and might be too small, $\alpha_{i} \ll c\hat{\sigma}$, and as a consequence a masked high leverage outlier may not be deleted.
In order to overcome the distortion caused by the masking problem and unmask the leverage $h_{i}$ of the potential high leverage points, we rely on the Minimum Covariance Determinant (MCD) procedure (Rousseeuw and Van Driessen~\cite{RouDrie:99}). For given coverage $k$, the MCD procedure yields the ``clean'' $\mathbf{X}_{k}$ matrix where $n-k$ vectors $\mathbf{x}_{i}$ corresponding to high leverage observations have been removed from the original matrix $\mathbf{X}$. Thus, for coverage $k$ close to $51\%$ (i.e. $k=[(n+p+1)/2]$), the leverage of each point $(\mathbf{x}_{i},y_{i}), \; i=k+1,\ldots,n$ as it joins the clean data subset taken from MCD is \[ h_{i}^{*} = \mathbf{x}_{i}^{T}(\mathbf{X}_{k,i}^{T}\mathbf{X}_{k,i})^{-1}\mathbf{x}_{i}, \;\; \text{for} \;\; i=k+1,\ldots,n, \] where $\mathbf{X}_{k,i}$ is the matrix $\mathbf{X}_{k}$ plus the row $\mathbf{x}_{i}$. In the above, $h_{i}^{*}$ is the unmasked leverage of each of the potential high leverage points, which can be interpreted as its leverage with respect to the clean data set yielded by the MCD procedure. For the rest of the observations $(\mathbf{x}_{i},y_{i})$ the new leverages now are \[ h_{i}^{*} = \mathbf{x}_{i}^{T} (\mathbf{X}_{k}^{T}\mathbf{X}_{k})^{-1}\mathbf{x}_{i}, \;\; \text{for} \;\; i=1,\ldots,k. \] Given that the high leverage identical outliers have been removed from the clean matrix $\mathbf{X}_{k}$ due to the MCD procedure, the new leverage values $h_{i}^{*}$ of the masked points will appear larger than the original $h_{i}$. Thus, in order to overcome the masking problem we transform the adjusted residuals as follows: \begin{equation}\label{eq_40} \alpha_{i}^{*}=\frac{r_{i}}{\sqrt{1-h_{i}}} \cdot \frac{\sqrt{1-h_{i}}}{\sqrt{1-h_{i}^{*}}} = \frac{r_{i}}{\sqrt{1-h_{i}^{*}}}. \end{equation} Based on the reliable results of the MCD procedure, the new robust adjusted residual in (\ref{eq_40}) is a good diagnostic criterion for all observations, even for multiple high leverage outliers. Incorporating the new robust adjusted residual in the PTS estimator, we can see how the penalties get individualised for each observation, since an observation will now be considered an outlier if \[ \alpha_{i}^{*} > (c\hat{\sigma}) \Rightarrow r_{i} > c \sqrt{1-h_{i}^{*}} \hat{\sigma}, \] that is, the penalty for each observation is defined as \begin{equation}\label{eq_penalties} p_{i} := (c \sqrt{1-h_{i}^{*}} \hat{\sigma})^{2} \end{equation} where $c$ is the cut-off parameter, $\hat{\sigma}$ the robust scale and $h_{i}^{*}$ the robust leverage.
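The leverage unmasking and the resulting penalties translate into a short routine. The following Python sketch is our own illustration (names and interface are hypothetical): it assumes the index set of the clean observations is already available, for instance from the support of an MCD fit, together with the robust scale $\hat{\sigma}$ and the cut-off parameter $c$.
\begin{verbatim}
import numpy as np

def pts_penalties(X, clean_idx, sigma_hat, c=2.0):
    # X         : (n, p) design matrix with rows x_i
    # clean_idx : indices of the k observations kept by the MCD procedure
    n, p = X.shape
    clean = np.zeros(n, dtype=bool)
    clean[clean_idx] = True
    G = X[clean].T @ X[clean]                     # X_k' X_k
    h_star = np.empty(n)
    for i in range(n):
        if clean[i]:
            # leverage with respect to the clean subset X_k
            h_star[i] = X[i] @ np.linalg.solve(G, X[i])
        else:
            # leverage of x_i as it joins the clean subset (X_{k,i})
            G_i = G + np.outer(X[i], X[i])
            h_star[i] = X[i] @ np.linalg.solve(G_i, X[i])
    return (c * np.sqrt(1.0 - h_star) * sigma_hat) ** 2
\end{verbatim}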
Thus, the PTS estimator tends to remove the masked high leverage observations from the data set, unless their residuals are very small. However, according to the above analysis, good high leverage points may also appear as extreme points with large leverages $h_{i}^{*}$ and the yielded adjusted residual threshold may be too small. As a consequence, some of the good high leverage points may be deleted from the data set. This affects only the efficiency of the solution. However, the efficiency of the final estimate is improved by testing each potential outlier one by one in the reinclusion stage of the PTS procedure which is described in the next section.
\subsection{Reinclusion of Observations}
In order to improve the efficiency of the final estimate, all observations with small studentized residuals are reincluded into the clean subset of size $k$. We apply a reinclusion test to each of the $n-k$ observations, similarly to Hadi and Simonoff~\cite{HadSim:93} and Pe\~na and Yohai~\cite{PenYoh:99}. All the $n-k$ points deleted in the initial stage are tested one by one using the studentized predicted error for outlyingness \[ t_{i}=\frac{y_{i}-\mathbf{x}_{i}^{T}\boldsymbol{\beta}_{PTS}}{\hat{\sigma}\sqrt{1+h_{i}}},\;\; k+1\le i \le n, \] where $h_{i}=\mathbf{x}_{i}^{T}[\mathbf{X}_{k}^{T}\mathbf{X}_{k}]^{-1}\mathbf{x}_{i}$. Under normality assumptions, $t_{i}$ follows Student's $t$ distribution with $k$ degrees of freedom. Therefore, every observation $(\mathbf{x}_{i},y_{i})$ is reincluded in the subset of the clean data if $t_{i}\le t_{\alpha/2,k}$. We have found empirically that for small samples, $t_{\alpha/2,k}=2$ works well. The final PTS estimate is the OLS solution of the resulting data set after the reinclusion.
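A compact sketch of this reinclusion step is given below (again illustrative Python under hypothetical names); the final PTS estimate is then simply the OLS fit on the returned index set.
\begin{verbatim}
import numpy as np
from scipy.stats import t as student_t

def reinclude(X, y, clean_idx, beta_pts, sigma_hat, alpha=0.05):
    n, p = X.shape
    k = len(clean_idx)
    clean = np.zeros(n, dtype=bool)
    clean[clean_idx] = True
    G_inv = np.linalg.inv(X[clean].T @ X[clean])
    cutoff = student_t.ppf(1.0 - alpha / 2.0, df=k)  # roughly 2 for small samples
    kept = list(clean_idx)
    for i in np.where(~clean)[0]:
        h_i = X[i] @ G_inv @ X[i]
        # studentized predicted residual of the deleted observation i
        t_i = (y[i] - X[i] @ beta_pts) / (sigma_hat * np.sqrt(1.0 + h_i))
        if abs(t_i) <= cutoff:
            kept.append(int(i))
    return sorted(kept)
\end{verbatim}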
\subsection{Quadratic Programming Formulation}\label{Sec3} The PTS estimator as defined in (\ref{eq_PTS}) can be stated as a Quadratic Mixed Integer Programming (QMIP) problem, by expanding the residuals and adding 0-1 decision variables, as follows: \begin{eqnarray} \label{eq_pts_qmip} \min_{\boldsymbol{\beta}} & & \sum_{i=1}^{n} \left( r_{i}^{2}+\delta_{i}(c\hat{\sigma})^2\right) \nonumber\\ \mbox{s.t.} & & \mathbf{x}_{i}^{T}\boldsymbol{\beta}+r_{i} \geq y_{i}-s_{i}\\
& & \mathbf{x}_{i}^{T}\boldsymbol{\beta}-r_{i} \leq y_{i}+s_{i}\nonumber\\
& & s_{i} \leq \delta_{i} M\nonumber\\
& & \delta_{i}\in \{0,1\} \nonumber\\
& & r_{i},s_{i} \geq 0 \;\;\; i=1,\ldots, n, \nonumber \end{eqnarray} where $s_{i}$ can be regarded as the pulling distance for moving an outlier towards the regression line, $\delta_{i}$ is a decision variable which takes the value of 1 if observation $i$ is to be removed from the clean data and 0 otherwise, and $M$ is an upper bound on the residuals $r_{i},i=1,\ldots,n$. Given any fixed $\boldsymbol{\delta}\in\{0,1\}^{n}$, from the $2^{n}$ possible ones, and using matrix notation problem (\ref{eq_pts_qmip}) becomes \begin{eqnarray} \min_{\boldsymbol{\beta}} & & \mathbf{r}^{T}\mathbf{r}+\boldsymbol{\delta}^{T}\mathbf{p} \nonumber\\ \mbox{s.t.} & & \mathbf{X}\boldsymbol{\beta}+\mathbf{r} \geq \mathbf{y}-\mathbf{s}\nonumber\\
& & \mathbf{X}\boldsymbol{\beta}-\mathbf{r} \leq \mathbf{y}+\mathbf{s}\nonumber\\
& & \mathbf{s} \leq \boldsymbol{\delta}M\nonumber\\
& & \mathbf{r},\mathbf{s} \geq \mathbf{0},\nonumber \end{eqnarray} where $\mathbf{p}=((c\hat{\sigma})^{2},(c\hat{\sigma})^{2},\ldots,(c\hat{\sigma})^{2})^{T}$, $\mathbf{r}=(r_{1},\ldots,r_{n})^{T}$, $\mathbf{s}=(s_{1},\ldots,s_{n})^{T}$, $\mathbf{y}=(y_{1},\ldots,y_{n})^{T}$, $\boldsymbol{\beta}=(\beta_{1},\ldots,\beta_{p})^{T}$ and the matrix $\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{n}]^{T}$. This problem has linear constraints and a convex quadratic objective function, since the Hessian of $\mathbf{r}^{T}\mathbf{r}$ has nonnegative eigenvalues (and it is therefore positive semi-definite). Therefore we have a convex program, which will have a unique global optimum solution according to the Karush-Kuhn-Tucker optimality conditions~\cite{KaKuTu:93}. Considering that there is a finite number of possible $\boldsymbol{\delta}$, we can conclude that under the assumption that the data are in general position, problem (\ref{eq_pts_qmip}) or equivalently (\ref{eq_PTS}) has a unique solution.
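For very small instances the problem can also be solved by plain enumeration, which makes the role of the 0-1 vector $\boldsymbol{\delta}$ explicit: once $\boldsymbol{\delta}$ is fixed, the inner minimisation reduces to OLS on the observations with $\delta_{i}=0$. The following Python sketch (illustrative only, and clearly exponential in $n$) exploits this; it is not a substitute for the CPLEX runs reported later.
\begin{verbatim}
import numpy as np
from itertools import product

def pts_bruteforce(X, y, penalty):
    # penalty : scalar (c * sigma_hat)**2 or an array of per-observation penalties
    n, p = X.shape
    pen = np.broadcast_to(np.asarray(penalty, dtype=float), (n,))
    best = (np.inf, None, None)
    for delta in product((0, 1), repeat=n):
        keep = np.array(delta) == 0
        if keep.sum() < p:                        # need at least p rows for OLS
            continue
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        val = np.sum((y[keep] - X[keep] @ beta) ** 2) + pen[~keep].sum()
        if val < best[0]:
            best = (val, beta, np.array(delta))
    return best   # (objective value, beta, delta)
\end{verbatim}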
Due to the high computational complexity of the resulting QMIP problem, the PTS estimator as defined by (\ref{eq_pts_qmip}) cannot be solved exactly in reasonable time, even for moderately sized problem instances (i.e. $n<50$). This is because the computational time required by even state-of-the-art integer solvers such as CPLEX grows exponentially with respect to the size of the problem instance. Moreover it is almost certain (unless $P=NP$) that there does not exist an exact polynomial time algorithm that solves (\ref{eq_pts_qmip}). Therefore the need arises for a specialised approximation algorithm, which we present in the next section.
\section{Fast-PTS}\label{sec_fast-pts}
Let us provide a combinatorial formulation of problem~(\ref{eq_pts_qmip}) in order to facilitate the discussion of the algorithm. Observe that $\delta_{i} =1$ implies $r_{i}=0$, while for $\delta_{i}=0$ we have that $r_{i} = r(\boldsymbol \beta)_{i}$ which is the residual error of observation $i$ with respect to the regression vector ${\boldsymbol \beta}$. If we let the set of observations be $O$ we could equivalently formulate problem~(\ref{eq_pts_qmip}) as follows: \begin{eqnarray} \min_{T\subseteq O} & & L(T):=\sum_{i\in T} r({\boldsymbol\beta}_{T})_{i}^{2} + \sum_{i\in O/T} p_{i} \label{eq_pts_comb} \end{eqnarray} where $p_{i}$ is the associated penalty for observation $i$ defined in (\ref{eq_penalties}), and ${\boldsymbol\beta}_{T}$ is the OLS solution for the set $T$ of observations uniquely defined as \begin{equation}\label{eq_normal} {\boldsymbol\beta}_{T} := (\mathbf{X}_{T}^{T}\mathbf{X}_{T})^{-1}\mathbf{X}_{T}^{T}\mathbf{y}_{T}, \end{equation} where $\mathbf{X}_{T}, \mathbf{y}_{T} $ are submatrices of $\mathbf{X}$ and $\mathbf{y}$ respectively, whose rows are indexed by the observations in $T$. The equivalence of problem (\ref{eq_pts_comb}) with (\ref{eq_pts_qmip}) stems from the one-to-one correspondence between ${\boldsymbol\beta}_{T}$ and $T\subseteq O$, as it is defined by the normal equations in (\ref{eq_normal}). The following necessary conditions for optimality can be stated. \begin{proposition}\label{pro_fast-pts} If $T\subseteq O$ is an optimal solution to (\ref{eq_pts_comb}) then \begin{eqnarray*} i) & & r(\boldsymbol{\beta}_{T})_{i}^{2} \leq p_{i}, \;\; \forall i\in T \\ ii)& & r(\boldsymbol{\beta}_{T})_{i}^{2} \geq p_{i}, \;\; \forall i\in O/T \end{eqnarray*} \end{proposition} \begin{proof} For the case {\em i)} let us define the set \[ S_{1} := \{ i\in T : r(\boldsymbol{\beta}_{T})_{i}^{2} > p_{i}\} \] and assume that $S_{1}\neq \emptyset$. Then \begin{eqnarray*} L(T)=\sum_{i\in T} r(\boldsymbol{\beta}_{T})_{i}^{2}+\sum_{i\in O/T} p_{i} & > & \sum_{i\in T/S_{1}} r(\boldsymbol{\beta}_{T})_{i}^{2}+\sum_{i\in O/\{T/S_{1}\}} p_{i} \\
& \geq & \sum_{i\in T/S_{1}} r(\boldsymbol{\beta}_{T/S_{1}})_{i}^{2} + \sum_{i\in O/\{T/S_{1}\} } p_{i} \end{eqnarray*} therefore we have found a set $T/S_{1}$ where $L(T) > L(T/S_{1})$, which is a contradiction to the hypothesis. For the case {\em ii)} analogously define the set \[ S_{2} := \{ i\in O/T : r(\boldsymbol{\beta}_{T})_{i}^{2} < p_{i}\} \] and assume that $S_{2}\neq \emptyset$. Then \begin{eqnarray*} L(T)=\sum_{i\in T} r(\boldsymbol{\beta}_{T})_{i}^{2}+\sum_{i\in O/T} p_{i} & > & \sum_{i\in T} r(\boldsymbol{\beta}_{T})_{i}^{2}+ \sum_{i\in S_{2}} r(\boldsymbol{\beta}_{T})_{i}^{2}
+ \sum_{i\in O/ \{T\cup S_{2}\}} p_{i} \\
& \geq & \sum_{i\in T\cup S_{2}} r(\boldsymbol{\beta}_{T\cup S_{2}})_{i}^{2} + \sum_{i\in O/ \{T\cup S_{2}\}} p_{i} \end{eqnarray*} which implies that $L(T) > L(T\cup S_{2})$ and again it contradicts the original hypothesis of the optimality of $T$. Therefore $S_{1}=S_{2}=\emptyset$ and the proposition is proved. \end{proof} It can be easily verified by a counterexample that the above conditions are not sufficient. Nevertheless the number of local optima which they characterise seems to be much smaller than that characterised by more classic $k$-exchange neighbourhoods which grow exponentially with respect to the size of the associated QMIP. Specifically, if we define the {\em $k$-exchange neighbourhood} of a feasible solution $T$ to (\ref{eq_pts_comb}) as \begin{equation*}
N(T)_{k} := \{ S\subseteq O: |T\Delta S| \leq k \} \end{equation*} where $\Delta$ stands for the symmetric difference of sets, then we have the trivial necessary conditions for optimality \begin{equation}\label{eq-kxnge} T \;\;\mbox{optimal to (\ref{eq_pts_comb})} \;\;\Rightarrow \;\; L(T) \leq L(S), \;\forall S\in N(T)_{k}. \end{equation} A local search algorithm based on (\ref{eq-kxnge}) will have to search the $k$-exchange neighbourhood of a feasible solution entirely, which requires a number of steps exponential in $k$ with either a breadth-first or a depth-first search strategy. Therefore even for small $k=1,2$ this type of local search is computationally very expensive.
The first necessary optimality condition of Proposition~\ref{pro_fast-pts} is used to construct a feasible solution in a randomised fashion, while the second is used to obtain the local optimum in the region defined by this solution.
\subsection{The Algorithm} \begin{figure}
\caption{The Fast-PTS algorithm}
\label{fig_fast-pts}
\end{figure} The Fast-PTS algorithm is presented in pseudocode in Figure~\ref{fig_fast-pts}. It accepts as inputs the problem instance data, and the parameters {\tt MaxIter} and {\tt RandomSeed} which specify the maximum number of iterations and a seed for the random number generator respectively. Initially the best solution $T_{min}$ is set to be the set of all observations, while this solution is updated in line 5 of Figure~\ref{fig_fast-pts}. Each iteration of the Fast-PTS algorithm is composed of two phases, a construction phase where a good solution is built in a greedy randomised fashion starting from an empty solution, and an improvement phase where the solution from the previous phase is improved to ensure local optimality. At the end of each iteration an approximate solution to (\ref{eq_pts_comb}) is provided. The maximum number of iterations is provided by the user, and the best solution found among all iterations is returned by the algorithm as the approximation to the optimum solution. This randomised procedure for constructing a solution before performing local search has substantial experimental evidence of good performance for NP-Hard combinatorial optimisation problems (see~\cite{PitRes02a}) since it helps the algorithm to examine a wider area of the feasible space without getting entrapped in local optima.
The construction phase of the algorithm is shown in Figure~\ref{fig_construction}. Let us call a partial solution of (\ref{eq_pts_comb}) say $T\subseteq O$ {\em penalty free} if \[ r(\boldsymbol{\beta}_{T})_{i}^{2} < p_{i}, \;\; \forall i\in T. \] In the construction phase of the algorithm, initially a partial solution $T\subset O$ is constructed by choosing at random $(p+1)$ observations such that $T$ is penalty free (line 1 in Figure~\ref{fig_construction}). Note that any $p$ observations are penalty free since they have zero residuals i.e. we have an exact fit. Then this solution is enlarged in a greedy randomised fashion maintaining its penalty free property. Given any partial solution $T$ the following set of possible candidates is defined \[ C(T) := \{ j\in O/T : r(\boldsymbol{\beta}_{T\cup j})_{i}^{2} < p_{i}, \;\;\forall i\in T\cup j \} \] The elements of $C(T)$ are then sorted in ascending order according to their objective function values as given in (\ref{eq_21}), that is \[
L(T\cup j_{1}) \leq L(T\cup j_{2}) \leq \cdots \leq L(T\cup j_{|C(T)|}), \]
and one observation $s$ is chosen randomly among the first $\max\{1, \alpha |C(T)|\}$ candidates, and the partial solution is updated (i.e. $T := T \cup \{s\}$). The parameter $\alpha \in [0,1]$ controls the degree of greediness versus randomness in the construction of the solution. If $\alpha=1$ the algorithm constructs the solution in a purely randomised fashion, while if $\alpha=0$ the construction is purely greedy. This procedure is repeated until we reach a partial solution $T$ with $C(T) = \emptyset$, where the construction phase ends.
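The construction phase can be sketched as follows (an illustrative Python rendering of Figure~\ref{fig_construction} under our own naming, not the Fortran implementation; the simple restart used to obtain a penalty-free initial $(p+1)$-point fit is our own simplification).
\begin{verbatim}
import numpy as np

def objective(X, y, pen, T):
    # OLS fit on the subset T and value of L(T); also returns all squared residuals
    beta, *_ = np.linalg.lstsq(X[T], y[T], rcond=None)
    r2 = (y - X @ beta) ** 2
    inside = np.zeros(len(y), dtype=bool)
    inside[T] = True
    return np.sum(r2[inside]) + np.sum(pen[~inside]), r2

def construct(X, y, pen, alpha=0.5, rng=None):
    rng = np.random.default_rng(rng)
    n, p = X.shape
    while True:                                   # random penalty-free start
        T = list(rng.choice(n, size=p + 1, replace=False))
        _, r2 = objective(X, y, pen, T)
        if np.all(r2[T] < pen[T]):
            break
    while True:
        cand = []
        for j in set(range(n)) - set(T):          # candidate set C(T)
            val, r2 = objective(X, y, pen, T + [j])
            if np.all(r2[T + [j]] < pen[T + [j]]):
                cand.append((val, j))
        if not cand:
            return T                              # construction phase ends
        cand.sort()                               # ascending objective values
        top = max(1, int(alpha * len(cand)))
        T.append(cand[rng.integers(top)][1])
\end{verbatim}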
\begin{figure}
\caption{The {\tt Construction} procedure of the Fast-PTS algorithm}
\label{fig_construction}
\end{figure} In lines 2-16 of the {\tt Construction} procedure shown in Figure~\ref{fig_construction} the main iterations by which a solution $T_{c}$ is constructed are shown. The iterations in lines 4-10 compute the set $C(T_{c})$ of candidate elements to be added to the solution, where the penalty free property is maintained in line 5. The cost as well as the index of the associated candidate element are added into a heap in line 8, which will later be used for sorting these values. Finally in lines 11-15, an element $j_{c}$ is chosen at random among the best candidates and added into our current solution. The procedure {\tt outheap()} returns the index $j$ of the observation with the smallest $L(T_{c}\cup j)$ value, as computed and inserted into the heap in lines 7 and 8.
The solution $T_{c}$ computed by the construction phase is penalty free, and we also know that there is no $j\in O/T_{c}$ such that $T_{c}\cup \{j\}$ is penalty free with respect to $\boldsymbol{\beta}_{T_{c}\cup \{j\}}$. However there may be an observation $j\in O/T_{c}$ such that its residual with respect to $\boldsymbol{\beta}_{T_{c}}$ is less than its respective penalty. That is, the necessary optimality conditions stated in Proposition~\ref{pro_fast-pts} may not be satisfied. Therefore further improvement of the solution provided by the construction phase could be achieved. This is performed in the improvement phase of the Fast-PTS algorithm, where the solution from the construction phase is changed to satisfy both necessary optimality conditions. Given a solution $T_1$ and its associated regressor vector $\boldsymbol{\beta}_{T_1}$, a new solution $T_{2}$ is computed as \[ T_{2} := \{ i\in O : r(\boldsymbol{\beta}_{T_{1}})_{i}^{2} < p_{i} \}, \] and this is repeated until $T_{1} = T_{2}$. As shown in the proof of Proposition~\ref{pro_fast-pts}, at each iteration $L(T_{1}) \geq L(T_{2})$, therefore the procedure terminates in a finite number of steps (usually the convergence takes on the order of 4 to 5 steps). This simple local search procedure is depicted in Figure~\ref{fig_local_search}. \begin{figure}
\caption{The {\tt LocalSearch} procedure of the Fast-PTS algorithm}
\label{fig_local_search}
\end{figure}
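A Python sketch of this improvement phase (again illustrative, with hypothetical names) follows; the Fast-PTS outer loop then simply repeats construction and improvement {\tt MaxIter} times and keeps the subset with the smallest objective value.
\begin{verbatim}
import numpy as np

def local_search(X, y, pen, T):
    # iterate T <- { i : r(beta_T)_i^2 < p_i } until it stabilises
    T = set(int(i) for i in T)
    while True:
        idx = sorted(T)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)   # OLS on T
        r2 = (y - X @ beta) ** 2
        T_new = set(np.flatnonzero(r2 < pen).tolist())
        if T_new == T:
            return idx, beta
        T = T_new
\end{verbatim}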
Overall the computational complexity of the Fast-PTS algorithm is polynomial on the size of the problem, since for a fixed number of iterations, each iteration requires a polynomial number of solutions to an OLS problem, which is itself polynomially solvable.
\section{Computational Results}\label{sec_comp}
In this section we present the results of the computational experiments performed in order to verify the performance of the Fast-PTS algorithm, and the overall robust behaviour of the proposed estimator. First we compare the solutions found by the Fast-PTS algorithm for the QMIP problem (\ref{eq_pts_qmip}) with the exact solutions found by ILOG's CPLEX software, on a set of artificial problem instances. Then we compare the robust behaviour of the algorithm with two of the most successful algorithms in robust regression, namely the Fast-LTS algorithm as implemented by Rousseeuw and Van Driessen~\cite{RouDrie:06} and the Fast-S algorithm as presented by Barrera and Yohai~\cite{BarYoh:06}. These comparisons were performed on the following sets of instances: \begin{itemize} \item a set of ``benchmark'' data sets for regression outlier problems widely known in the literature, \item a set of Monte Carlo generated data sets, proposed by Barrera and Yohai in~\cite{BarYoh:06}. \end{itemize}
The Fast-PTS algorithm was implemented using Intel's Fortran Compiler version 10.1 for Linux with the optimisation flags {\tt -ipo, -O3, -no-prec-div, -static}, and ILOG's CPLEX integer programming solver version 9.1 was used. All computational experiments were performed on an Intel Core Duo 2.4GHz computer with 2GB of memory running openSUSE Linux 10.3\footnote{The code is available upon request at {\tt [email protected]}}.
\subsection{Exact Solutions}\label{subsec_exact} In Table~\ref{table_0} we can see the performance of the Fast-PTS algorithm with respect to exact solutions for 8 problem instances. In each instance the problem was solved exactly by the CPLEX mixed integer quadratic programming solver, and the Fast-PTS obtained the exact solution for all instances. The number of maximum iterations was set to 100, although in all cases the optimal solution was obtained in fewer iterations. As we can see from Table~\ref{table_0}, the computational effort required for the exact solution grows exponentially with respect to the size of the instance, while the Fast-PTS algorithm requires a fraction of a second. Similar comparison would be desirable for larger size instances such as the ones presented in section~\ref{subsec_monte_carlo}. This however is infeasible since the required CPU time would be excessively large for the exact solution, and the size of the branch and cut tree generated by CPLEX exceeds the available memory capacity. Specifically for the problem set in Table~\ref{table_0}, for instances with $n\geq 128$, CPLEX required larger memory for the tree and terminated the solution process. \vspace*{5mm} \begin{table}[htb] \centering {\footnotesize \begin{tabular}{lccccc}\hline
& & & Exact & CPLEX & Fast-PTS \\
data set & $n$ & $p$ & solution & (CPU seconds) & (CPU seconds) \\ \hline
PTS1 & 48 & 2 & 12642.9 & 6.05 & 0.038 \\
PTS2 & 58 & 2 & 25910.7 & 5.68 & 0.048 \\
PTS3 & 68 & 2 & 17658.1 & 146.4 & 0.064 \\
PTS4 & 78 & 2 & 26158.8 & 42.3 & 0.076 \\
PTS5 & 88 & 2 & 37951.5 & 1368 & 0.096 \\
PTS6 & 98 & 2 & 41289.2 & 2256 & 0.092 \\
PTS7 & 108 & 2 & 48519.3 & 32055 & 0.132 \\
PTS8 & 118 & 2 & 52829.8 & 37948 & 0.152 \\ \hline \end{tabular} } \caption{Comparison of Fast-PTS with CPLEX}\label{table_0} \end{table} \vspace*{5mm}
\subsection{Benchmark Instances}\label{subsec_benchmark}
Five widely used benchmark instances were examined in this experimental study. The first four are taken from Rousseeuw and Leroy~\cite{RouLer:87} and have been studied by many statisticians in robust literature. The last one is an artificial example from Hadi and Simonoff~\cite{HadSim:93}. Table~\ref{table_1} gives the corresponding results for the PTS estimator, indicating the name of the data set, its dimension and number of observations, the included outliers, the percentage of outliers that have been identified and the running times in seconds. The penalties for deleting outliers are $(2\sqrt{1-h_{i}^{*}}\cdot\hat{\sigma})^2$, as have been proposed in section~\ref{Sec2}. These results are similar to those reported in the literature for other high breakdown estimators and they indicate that the proposed method PTS behaves reliably on these test sets.
\textbf{Telephone Data.} The first data set contains the total number of telephone calls made in Belgium during the years 1950-1973. During the period 1964-1969 (cases 15-20), another recording system was used, hence these observations are unusually high while cases 14 and 21 are marginal. The outliers draw the OLS regression line upwards, masking the true outliers, while swamping in the clean cases 22-24 as too low. The high breakdown estimators Fast-LTS and Fast-S correctly flag the outliers. Also, our PTS estimator correctly identifies the true outliers.
\textbf{Hertzsprung-Russell Stars Data.} This data set contains 47 stars in the direction of Cygnus, where $x$ is the logarithm of the effective temperature at the surface of a star and $y$ is the logarithm of its light intensity. Four stars, called giants, have low temperature with high light intensity. These outliers are cases 11, 20, 30 and 34, which can be clearly seen in a scatter plot. The three high breakdown methods successfully identify the outliers.
\textbf{Modified Wood Gravity Data.} This is a real data set with five independent variables. It consists of 20 cases; some of them were replaced by Rousseeuw~\cite{Rousseeuw:84} to contaminate the data with a few outliers, namely cases 4, 6, 8 and 19. Once again, all three methods manage to reveal the outliers.
\textbf{Hawkins, Bradu and Kass Data.} The data have been generated by Hawkins et al.~\cite{HawBraKas:84} for illustrating the merits of a robust technique. This artificial data set offers the advantage that at least the position of the good or bad leverage points is known. The Hawkins, Bradu and Kass data consists of 75 observations in four dimensions. The first ten observations form a group of identical bad leverage points, the next four points are good leverage points, while the remaining ones are good data. The problem in this case is to fit a hyperplane to the observed data. Plotting the regression residuals from the model obtained from the standard OLS estimator, the bad high-leverage point data are masked and do not show up in the residual plot. Some robust methods not only fail to identify the outliers, but they also swamp in the good cases 11-14. The Fast-LTS and Fast-S estimators correctly flag the outliers. For the PTS estimator, the MCD procedure reveals the first 14 points of this data set as high leverage points. Application of the PTS to these data, starting with a robust scale estimate of about $\hat{\sigma}=0.61$ and downweighting the penalty cost with penalties from (\ref{eq_penalties}), rejects only the first 10 points as outliers, which are known as the bad leverage points.
\textbf{Hadi Data.} These data have been created by Hadi and Simonoff~\cite{HadSim:93}, in order to illustrate the performance of various robust methods in outlier identification. The two predictors were originally created as uniform $(0, 15)$ and were then transformed to have a correlation of $0.5$. The dependent variable was then created to be consistent with the model $y=x_{1}+x_{2}+u$ with $u \sim N(0, 1)$. The first $3$ cases $(1-3)$ were contaminated to have predictor values around $(15, 15)$, and to satisfy $y=x_{1}+x_{2}+4$. Scatterplots or diagnostics fail to detect the outliers. Many identification methods fail to identify the three outliers. Some bounded influence estimates have their largest absolute residual at the clean case $17$, indicating potential swamping. The efficient high breakdown methods Fast-LTS and Fast-S do identify the three outliers as the most outlying cases in the sample, but the residuals are too small for them to be considered significant outliers. In contrast, the robust methods proposed by Hadi and Simonoff~\cite{HadSim:93} and the Fast-PTS estimator correctly identify the clean set $4-25$, with each of the cases $1-3$ having residuals greater than $3.78$.
\begin{table}[htb] \centering {\footnotesize \begin{tabular}{lcclcc}\hline
& & & & \%outliers & time \\
data set & $n$ & $p$ & outliers & identified & (CPU seconds) \\ \hline
Telephone & 24 & 2 &15,16,17,18,19,20& 100 & 0.05 \\
Stars & 47 & 2 &11,20,30,34 & 100 & 0.18 \\
Wood & 20 & 6 &4,6,8,19 & 100 & 0.06 \\
Hawkins & 75 & 4 &1,2,3,4,5,6,7,8,9,10&100 & 0.50 \\
Hadi & 25 & 3 &1,2,3 & 100 & 0.08 \\ \hline \end{tabular} } \caption{Performance of the Fast-PTS algorithm on some small data sets.}\label{table_1} \end{table}
\subsection{Monte Carlo Simulation}\label{subsec_monte_carlo}
To explore further the properties of the PTS method, we have performed an extensive set of simulation experiments for larger sample sizes and observation dimensions taken from Barrera and Yohai~\cite{BarYoh:06}. The experiments compare the performance of our Fast-PTS algorithm with the results of Fast-LTS and Fast-S algorithms.
Barrera and Yohai~\cite{BarYoh:06} considered a model as in (\ref{eq_1}) with $\mathbf{x}=(1,x_{1},x_{2},\ldots,x_{p})$, where a proportion $(1-\epsilon)$ of the data samples $(x_{1},x_{2},\ldots,x_{p},y)$ follows a multivariate normal distribution. Without loss of generality the mean vector is taken to be 0 and the covariance matrix to be the identity. These observations follow the model in (\ref{eq_1}) with $\boldsymbol{\beta}=0$. The contaminated observations are high leverage outliers with $\mathbf{x}=(1,100,0,\ldots,0)$ and $y=\text{slope}\times 100$, where the slope of contamination takes values from 0.9 to 2.0.
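For reference, the sampling scheme just described can be reproduced with a few lines of Python (an illustrative sketch under assumed names; the intercept column is prepended to the design matrix as in the model above).
\begin{verbatim}
import numpy as np

def barrera_yohai_sample(n=400, p=36, eps=0.10, slope=1.0, seed=None):
    rng = np.random.default_rng(seed)
    n_out = int(round(eps * n))
    # clean part: x ~ N(0, I_p) and y = x' beta + u with beta = 0, u ~ N(0, 1)
    X = rng.standard_normal((n, p))
    y = rng.standard_normal(n)
    # contamination: high leverage outliers at (100, 0, ..., 0), y = slope * 100
    X[:n_out] = 0.0
    X[:n_out, 0] = 100.0
    y[:n_out] = slope * 100.0
    return np.column_stack([np.ones(n), X]), y
\end{verbatim}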
Under high leverage contaminations, the objective functions of the three high breakdown estimators typically have two distinct types of local minimum, that is one close to the true value of the regression parameter and a second one close to the slope of the outliers.
\begin{table}[htb] \centering {\footnotesize \begin{tabular}{lcccccccccccc}\hline & \multicolumn{12}{c}{slope} \\
method & 0.9 & 1.0 & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 & 1.6 & 1.7 & 1.8 & 1.9 & 2.0 \\ \hline
Fast-PTS & 58 & 44 & 21 & 8 & 3 & 2 & 1 & 1 & 1 & 0 & 0 & 0 \\
Fast-S & 88 & 60 & 29 & 8 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
Fast-LTS & 100 & 100 & 95 & 86 & 67 & 42 & 21 & 9 & 4 & 1 & 0 & 0 \\ \hline \end{tabular} } \caption{Percentage of samples where convergence occurred to the wrong local minimum. $n=400$, $p=36$ and $10\%$ of outliers located at $(\mathbf{x}_{1},\ldots,\mathbf{x}_{36},y)=(1,100,0,\ldots,0,slope\times 100)$.}\label{table_2} \end{table}
Two measures of performance are used in this Monte Carlo study to compare the three estimators: \begin{itemize} \item The percentage of samples for which each algorithm converged to a wrong local minimum i.e. a local minimum close to the slope of the outliers. \item The mean square error (MSE) of the parameter estimate $\hat{\boldsymbol{\beta}}$. \end{itemize}
Several different values of the slope of contamination were used to determine the behaviour of the three algorithms. The slope ranges from 0.9 to 2.0, with sample size $n=400$ and dimension $p=36$. The proportion of outliers $\epsilon$ was set to $10\%$.
Tables~\ref{table_2} and~\ref{table_3} contain the percentage of convergence to a wrong local minimum and the MSEs respectively. From Table~\ref{table_2} we observe that the Fast-PTS has the lowest rate of wrong convergence for slopes between $0.9$ and $1.1$. For larger slopes the results are comparable for all estimators. In Table~\ref{table_3} it is shown that the Fast-PTS has the lowest MSE for the contaminated data with slope between $0.9$ and $1.9$. For the rest of the data, the MSE of the Fast-PTS is comparable with that of Fast-LTS and Fast-S. The main conclusion from Tables~\ref{table_2} and~\ref{table_3} is that for contamination with small slope, the Fast-PTS performs better than Fast-S and Fast-LTS in terms of the percentage of wrong convergence. For larger slopes, the PTS shows a marked improvement in the MSEs.
\begin{table}[htb] \centering {\footnotesize \begin{tabular}{lcccccccccccc}\hline & \multicolumn{12}{c}{slope} \\
method & 0.9 & 1.0 & 1.1 & 1.2 & 1.3 & 1.4 & 1.5 & 1.6 & 1.7 & 1.8 & 1.9 & 2.0 \\ \hline
Fast-PTS & 1.54 & 1.30 & 0.82 & 0.48 & 0.32 & 0.29 & 0.36 & 0.27 & 0.31 & 0.30 & 0.33 & 0.48 \\
Fast-S & 1.65 & 1.40 & 0.98 & 0.62 & 0.48 & 0.47 & 0.45 & 0.45 & 0.47 & 0.46 & 0.46 & 0.46 \\
Fast-LTS & 1.73 & 1.46 & 1.39 & 1.34 & 1.08 & 0.92 & 0.74 & 0.58 & 0.51 & 0.47 & 0.46 & 0.46 \\ \hline \end{tabular} } \caption{Mean Squared Errors. $n=400$, $p=36$ and $10\%$ of outliers located at $(\mathbf{x}_{1},\ldots,\mathbf{x}_{36},y)=(1,100,0,\ldots,0,slope\times 100)$.} \label{table_3} \end{table}
To explore further the comparison of the three estimators, we use again the same data as Barrera and Yohai~\cite{BarYoh:06}, with different values of $n$, $p$, considering only one value of the contamination slope, $y=100\times 1$ and $\epsilon=10\%$. Tables~\ref{table_4} and ~\ref{table_5} show the average time needed until convergence of the algorithm, the percentage of samples where the algorithm converged to a wrong local minimum (``$\%$wrong'') and the mean squared error (``MSE'' ) for the three estimators. From Table~\ref{table_4} we observe that the Fast-PTS performs significantly better in both performance criteria especially when the data set is of size $n=100$ or $n=500$.
\begin{table}[htb] \centering {\footnotesize
\begin{tabular}{lc|ccc|cc|cc}\hline
& \multicolumn{1}{c}{} & \multicolumn{3}{c}{Fast-PTS} & \multicolumn{2}{c}{Fast-S} & \multicolumn{2}{c}{Fast-LTS} \\
$n$ & \multicolumn{1}{c}{$p$} & CPU time & $\%$wrong & \multicolumn{1}{c}{MSE} & $\%$wrong & \multicolumn{1}{c}{MSE} & $\%$wrong & MSE \\ \hline
100 & 2 & 0.08 & 0.2 & 0.030 & 30.6 & 0.370 & 50.8 & 0.692 \\
& 3 & 0.09 & 0.2 & 0.052 & 37.2 & 0.506 & 56.0 & 0.890 \\
& 5 & 0.12 & 0.2 & 0.099 & 46.8 & 0.773 & 64.8 & 1.193 \\ \hline
500 & 5 & 2.28 & 0.0 & 0.014 & 15.4 & 0.193 & 61.4 & 0.803 \\
& 10 & 3.78 & 0.0 & 0.029 & 19.8 & 0.292 & 65.8 & 0.935 \\
& 20 & 8.31 & 0.2 & 0.069 & 30.4 & 0.547 & 79.4 & 1.207 \\ \hline
1000 & 5 & 8.75 & 0.0 & 0.007 & 7.8 & 0.095 & 61.2 & 0.715 \\
& 10 & 14.41 & 0.0 & 0.014 & 5.4 & 0.089 & 69.4 & 0.859 \\
& 20 & 31.11 & 0.0 & 0.028 & 12.4 & 0.208 & 82.4 & 1.158 \\ \hline \end{tabular} } \caption{Samples with $10\%$ of outliers located at $(\mathbf{x}_{1},\ldots,\mathbf{x}_{p},y)=(1,100,0,\ldots,0,100)$.}\label{table_4} \end{table}
For heavier contamination $\epsilon=0.20$ and slope 2.2, Table~\ref{table_5} shows the results of these simulations. As in Table~\ref{table_4} we observe that the Fast-PTS has the best overall performance, especially for the small data sets ($n=100$). However, for larger data sets ($n\ge 500$), the Fast-PTS and Fast-S perform comparably, since the outliers are easier to detect by both robust estimators due to the large slope value.
\begin{table}[htb] \centering {\footnotesize
\begin{tabular}{lc|ccc|cc|cc}\hline
& \multicolumn{1}{c}{} & \multicolumn{3}{c}{Fast-PTS} & \multicolumn{2}{c}{Fast-S} & \multicolumn{2}{c}{Fast-LTS} \\
$n$ & \multicolumn{1}{c}{$p$} & CPU time & $\%$wrong & \multicolumn{1}{c}{MSE} & $\%$wrong & \multicolumn{1}{c}{MSE} & $\%$wrong & MSE \\ \hline
100 & 2 & 0.08 & 0.2 &0.032 & 10.0 & 0.543& 40.9 &2.333 \\
& 3 & 0.09 & 0.0 &0.056 & 15.6 & 0.947& 51.6 &3.218 \\
& 5 & 0.11 & 0.8 &0.109 & 34.0 & 2.303& 70.2 &5.241 \\ \hline
500 & 5 & 2.28 & 0.0 &0.016 & 0.0 & 0.025& 23.4 &1.236 \\
& 10 & 3.74 & 0.0 &0.032 & 0.0 & 0.076& 33.8 &2.102 \\
& 20 & 7.82 & 11.2 &0.873 & 9.2 & 0.734& 72.2 &4.276 \\ \hline
1000 & 5 & 9.40 & 0.0 &0.008 & 0.0 & 0.012& 11.0 &0.542 \\
& 10 & 14.94 & 0.0 &0.016 & 0.0 & 0.025& 20.4 &1.205 \\
& 20 & 31.63 & 0.0 &0.030 & 0.0 & 0.055& 49.8 &2.980 \\ \hline \end{tabular} } \caption{Samples with $20\%$ of outliers located at $(\mathbf{x}_{1},\ldots,\mathbf{x}_{p},y)=(1,100,0,\ldots,0,220)$.} \label{table_5} \end{table}
To further evaluate the Fast-PTS algorithm and compare it with the well-known methods discussed in this article, we performed additional Monte Carlo experiments. To carry out one simulation run, we proceeded as follows. The distributions of the independent variables and the errors, and the values of the parameters, are specified in advance. The observations $y_{i}$ were obtained following the regression model in (\ref{eq_1}) with $p=3$ and $n=50$, where the coefficient values are $\beta_{1}=1.20$, $\beta_{2}=-0.80$ and a zero constant term $\beta_{0}=0.0$. The iid error term follows the Gaussian distribution $u \sim N(0, \sigma^{2}=16^{2})$, while $x_{1}$ and $x_{2}$ are iid values drawn from the normal distributions $N(\mu=20, \sigma^{2}=6^{2})$ and $N(\mu=30, \sigma^{2}=8^{2})$ respectively. We consider that the sample may contain three types of outliers: regression outliers (``bad'' high-leverage points), ``good'' high-leverage points, and response outliers ($y$-outliers). An extra value drawn from the uniform distribution $U(a=80, b=220)$ is added to $x_{1}$ or $x_{2}$ for a regression outlier; for a ``good'' leverage point it is added to $x_{1}$ or $x_{2}$ but the value of the dependent variable $y$ follows the contaminated predictors according to the above regression model; for a response outlier it is added to $y$. All simulation results are based on $150$ replications, enough to obtain a relative error $<10\%$ with a reasonable confidence level of at least $90\%$ for all the simulation estimates. The robust scale estimate $\hat{\sigma}$ from LTS with coverage $k=28$ is used throughout the simulation study.
Tables~\ref{table_6} and ~\ref{table_7} display the results concerning the performance of the three robust estimators corresponding to the following cases: Table~\ref{table_6} is based on data contaminated by ``bad'' and ``good'' high leverage points whereas Table~\ref{table_7} is based on data contaminated by ``bad'' high leverage outliers (heavier contamination).
\begin{table}[htb] \centering {\footnotesize \begin{tabular}{lccc}\hline
& Fast-PTS & Fast-S & Fast-LTS \\ \hline
\%wrong & 2.7 & 13.3 & 16.0 \\
mean estimate of $\hat{\beta}_{0}$ & -1.796 & -1.633 & -2.782 \\
mean estimate of $\hat{\beta}_{1}$ & 1.176 & 1.129 & 1.147 \\
mean estimate of $\hat{\beta}_{2}$ & -0.766 &-0.758 & -0.732 \\
mean squared error of $\hat{\beta_{0}}$ & 35.38 & 58.41 & 112.21 \\
mean squared error of $\hat{\beta_{1}}$ & 0.036 & 0.106 & 0.132 \\
mean squared error of $\hat{\beta_{2}}$ & 0.015 & 0.030 & 0.049 \\
computation time (secs) & 0.56 & 0.29 & 0.18 \\ \hline \end{tabular} } \caption{Samples with $32\%$ of outliers ($x$-outliers=6, ``good'' leverage points=4, $y$-outliers=6), $n=50$, $p=3$. True: $\beta_{0}=0.0$, $\beta_{1}=1.20$, $\beta_{2}=-0.80$.}\label{table_6} \end{table}
\begin{table}[htb] \centering {\footnotesize \begin{tabular}{lccc}\hline
& Fast-PTS & Fast-S & Fast-LTS \\ \hline
\%wrong & 26.0 & 61.3 & 50.0 \\
mean estimate of $\hat{\beta}_{0}$ & -3.955 & -1.607 & -2.123 \\
mean estimate of $\hat{\beta}_{1}$ & 1.131 & 0.814 & 0.910 \\
mean estimate of $\hat{\beta}_{2}$ & -0.646 &-0.569 & -0.600 \\
mean squared error of $\hat{\beta_{0}}$ & 84.41 & 158.25 & 173.98 \\
mean squared error of $\hat{\beta_{1}}$ & 0.088 & 0.473 & 0.390 \\
mean squared error of $\hat{\beta_{2}}$ & 0.084 & 0.167 & 0.154 \\
CPU time & 0.55 & 0.27 & 0.16 \\ \hline \end{tabular} } \caption{Samples with $32\%$ of outliers ($x$-outliers=10, $y$-outliers=6), $n=50$, $p=3$. True: $\beta_{0}=0.0$, $\beta_{1}=1.20$, $\beta_{2}=-0.80$.} \label{table_7} \end{table}
\section{Conclusions and Future Research}\label{sec_conclusions} The PTS estimator can be considered as a procedure which, instead of improving upon previous robust estimators such as LTS and MCD, combines the information obtained from them in a penalized manner, so as to simultaneously retain their favourable robustness characteristics while being efficient and able to identify multiple masked outliers. Specifically, this is achieved by using a robust scale and leverage from the LTS and MCD respectively. Moreover, an efficient and fast algorithm, based on a set of necessary conditions for local optimality, is also presented. Extensive computational experiments on a large set of instances indicate that the proposed estimator outperforms other robust estimators in the presence of all possible types of outliers.
Future research can be performed on both the estimator and the algorithm. With respect to the estimator, a different robust scale $\hat{\sigma}$ could be investigated, and the estimator could also be extended to treat data sets which contain a mixture of multilinear regression models. The algorithm could also be modified to handle massive data sets, which may have millions of observations, by exploiting its natural parallel structure.
\end{document}
\begin{document}
\title{The general position number of integer lattices}
\author{ Sandi Klav\v zar $^{a,b,c}$ \and Gregor Rus $^{c,d}$ }
\date{}
\maketitle
\begin{center} $^a$ FMF, University of Ljubljana, Slovenia \\
$^b$ FNM, University of Maribor, Slovenia \\
$^{c}$ IMFM, Ljubljana, Slovenia \\
$^d$ FOV, University of Maribor, Slovenia\\
\end{center}
\begin{abstract} The general position number ${\rm gp}(G)$ of a connected graph $G$ is the cardinality of a largest set $S$ of vertices such that no three pairwise distinct vertices from $S$ lie on a common geodesic. The $n$-dimensional grid graph $P_\infty^n$ is the Cartesian product of $n$ copies of the two-way infinite path $P_\infty$. It is proved that if $n\in {\mathbb N}$, then ${\rm gp}({P_\infty^n}) = 2^{2^{n-1}}$. The result was earlier known only for $n\in \{1,2\}$ and partially for $n=3$. \end{abstract}
\noindent {\bf E-mails}: [email protected], [email protected]
\noindent {\bf Key words}: general position problem; Cartesian product of graphs; integer lattice; Erd\H{o}s-Szekeres theorem
\noindent {\bf AMS Subj. Class.}: 05C12, 05C76, 11B75
\section{Introduction and preliminaries} \label{sec:intro}
Subsets of vertices of (infinite) grids with special properties are of wide interest, the variety~\cite{bouznif-2019, distefano-2017, drews-2019, dyb-2020, gan-2018, guo-2020, jobsen-2018} of such investigations clearly supports this statement. In this note we are interested in largest general position sets of grids. By now only some specific results about these sets in grids were known, here we solve the problem completely.
A set $S$ of vertices of a connected graph $G$ is a {\em general position set} if $d_G(u,v) \ne d_G(u,w) + d_G(w,v)$ holds for every $\{u,v,w\}\in \binom{S}{3}$, where $d_G(x,y)$ denotes the shortest-path distance between $x$ and $y$ in $G$. The {\em general position number} ${\rm gp}(G)$ of $G$ is the cardinality of a largest general position set in $G$. This concept and terminology was introduced in~\cite{manuel-2018a}, in part motivated by the century old Dudeney's No-three-in-line problem~\cite{dudeney-1917}. A couple of years earlier and in different terminology, the problem was also considered in~\cite{ullas-2016}. Moreover, in the special case of hypercubes, the general position problem has been studied back in 1995 by K\"orner~\cite{korner-1995}. Following these seminal papers, the general position problem has been studied from different perspectives in several subsequent papers~\cite{anand-2019, ghorbani-2019, klavzar-2019+, klavzar-2019b, manuel-2018b, patkos-2019+}.
The {\em Cartesian product} $X\,\square\, Y$ of graphs $X$ and $Y$ is the graph with the vertex set $V(X) \times V(Y)$, vertices $(x,y)$ and $(x',y')$ being adjacent if either $x=x'$ and $yy'\in E(Y)$, or $y=y'$ and $xx'\in E(X)$. The Cartesian product $X_1\,\square\, \cdots \,\square\, X_n$, where each factor $X_i$ is isomorphic to $X$, will be shortly denoted by $X^n$. If $P_\infty$ denotes the two-way infinite path, then one of the main results from~\cite{manuel-2018b} asserts that ${\rm gp}(P_\infty^2) = 4$. In the same paper it was also proved that $10\le {\rm gp}(P_\infty^3) \le 16$. The lower bound $10$ was improved to $14$ in~\cite{klavzar-2019b}. In this note we round these investigations by proving the following result.
\begin{theorem} \label{thm:main} If $n\in {\mathbb N}$, then ${\rm gp}(P_\infty^n) = 2^{2^{n-1}}$. \end{theorem}
In the rest of the section we list some further notation and preliminary results. In the next section we prove the theorem. In the concluding section we give a couple of consequences of the theorem and pose an open problem.
For a positive integer $k$ we will use the notation $[k] = \{1,\ldots, k\}$. Throughout we will set $V(P_\infty) = {\mathbb Z}$, where $u,v\in V(P_\infty)$ are adjacent if and only if $|u-v| = 1$. With this convention we have $V(P_\infty^n) = {\mathbb Z}^n$. If $u\in V(P_\infty^n)$, then for the coordinates of $u$ we will use the notation $u = (u_1, \ldots, u_n)$. If a vertex from $V(P_\infty^n)$ will be indexed, say $u_i\in V(P_\infty^n)$, then this notation will be extended as $u_i = (u_{i,1}, \ldots, u_{i,n})$. From the Distance Lemma~\cite[Lemma 12.2]{imrich-2008} it follows that \begin{equation} \label{eq:distance}
d_{P_\infty^n}(u, v) = \sum_{i=1}^n |u_i - v_i|\,. \end{equation} From here it is not difficult to deduce that a vertex $w\in V(P_\infty^n)$ lies on a shortest $u,v$-path in $P_\infty^n$ if and only if $\min \{u_i,v_i\}\leq w_i\leq \max\{u_i,v_i\}$ holds for every $i\in [n]$.
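This criterion is straightforward to check computationally; the following short Python snippet (purely illustrative, not part of any proof) tests whether a finite set of lattice points is in general position.
\begin{verbatim}
from itertools import combinations

def between(u, v, w):
    # w lies on a shortest u,v-path iff it is coordinatewise between u and v
    return all(min(a, b) <= c <= max(a, b) for a, b, c in zip(u, v, w))

def in_general_position(points):
    for a, b, c in combinations(points, 3):
        if between(a, c, b) or between(a, b, c) or between(b, c, a):
            return False
    return True
\end{verbatim}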
A sequence of real numbers is {\it monotone} if it is monotonically increasing or monotonically decreasing. The celebrated Erd\H{o}s-Szekeres result on monotone sequences reads as follows (cf.\ also~\cite[Theorem 1.1]{bukh-2014}).
\begin{theorem} {\rm \cite{ErSz35}} \label{thm:Er-Sze} For every $n\ge 2$, every sequence $(a_1,\ldots, a_N)$ of real numbers with $N\ge (n-1)^2 + 1$ elements contains a monotone subsequence of length $n$. \end{theorem}
\section{Proof of Theorem~\ref{thm:main}} \label{sec:proof}
The theorem is obviously true for $n = 1$ and was proved for $n=2$ in~\cite[Corollary 3.2]{manuel-2018b}.
Let now $n\ge 3$ and let $U^{(1)} = \{u_1, \ldots, u_{2^{2^{n-1}} +1}\}$ be a set of vertices of $P_\infty^n$ of cardinality $2^{2^{n-1}} +1$. We may without loss of generality assume that the first coordinates of the vertices from $U^{(1)}$ are ordered, that is, $u_{1,1}\le u_{2,1} \le \cdots \le u_{{2^{2^{n-1}} +1},1}$. By Theorem~\ref{thm:Er-Sze}, there exists a subset $U^{(2)}$ of $U^{(1)}$ of cardinality $2^{2^{n-2}} +1$, such that the second coordinates of the vertices from $U^{(2)}$ form a monotone (sub)sequence. Inductively applying this argument, we arrive at a set $U^{(n)}\subset U^{(n-1)}$ of cardinality $2^{2^{n-n}} +1 = 3$, in which the $n^{\rm th}$ coordinates of the three vertices form a monotone (sub)sequence. As $U^{(n)}\subset U^{(n-1)} \subset \cdots \subset U^{(1)}$, the induction argument yields that for every $i\in [n-1]$, the $i^{\rm th}$ coordinates of the vertices from $U^{(n)}$ likewise form a monotone (sub)sequence. If $U^{(n)} = \{u,v,w\}$, where $u_1 \le v_1 \le w_1$, this implies (having \eqref{eq:distance} in mind) that $v$ lies on a shortest $u,w$-path. We conclude that ${\rm gp}(P_\infty^n) \le 2^{2^{n-1}}$.
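The repeated extraction of monotone subsequences used in this argument can be made explicit; the quadratic-time sketch below (illustrative only) returns the indices of a monotone subsequence of prescribed length whenever one exists, as guaranteed by Theorem~\ref{thm:Er-Sze}.
\begin{verbatim}
def monotone_subsequence(seq, m):
    # indices of a monotone subsequence of length m (None if there is none);
    # by Erdos-Szekeres it exists whenever len(seq) >= (m - 1) ** 2 + 1
    n = len(seq)
    for cmp in (lambda a, b: a <= b, lambda a, b: a >= b):
        best = [1] * n            # best[i]: longest monotone run ending at i
        prev = [-1] * n
        for i in range(n):
            for j in range(i):
                if cmp(seq[j], seq[i]) and best[j] + 1 > best[i]:
                    best[i], prev[i] = best[j] + 1, j
            if best[i] >= m:      # reconstruct the chain of indices
                chain, k = [], i
                while k != -1:
                    chain.append(k)
                    k = prev[k]
                return chain[::-1][:m]
    return None
\end{verbatim}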
To prove the other inequality we are going to inductively construct a general position set $X^{(n)} = \{x_1^{(n)}, \ldots, x_{2^{2^{n-1}}}^{(n)} \}$ for $n\ge 2$ as follows. Set $X^{(2)} = \{(1,2),(2,1),(3,4),(4,3)\}$, where $x_1^{(2)} = (1,2)$, $x_2^{(2)} = (2,1)$, $x_3^{(2)} = (3,4)$, and $x_4^{(2)} = (4,3)$. Suppose now that $X^{(n-1)}$ is defined for some $n\ge 3$, and construct $X^{(n)}$ as follows. Set first \begin{itemize} \item $x_{i,1}^{(n)} = i$, $i\in [2^{2^{n-1}}]$. \end{itemize} Next, write each $i\in [2^{2^{n-1}}]$ as $i = p\cdot 2^{2^{n-2}} + r$, where $0\le p < 2^{2^{n-2}}$ and $r\in [2^{2^{n-2}}]$, and set \begin{itemize} \item $x_{i,j}^{(n)} = \left(x_{(p+1),j}^{(n-1)} - 1\right)\cdot 2^{2^{n-2}} + x_{r,j}^{(n-1)}$ for each $j\in \{2,\ldots, n-1\}$, and \item $x_{i,n}^{(n)} = x^{(n-1)}_{p+1,n-1}\cdot 2^{2^{n-2}} - x^{(n-1)}_{r,n-1}+1\,.$ \end{itemize} Roughly speaking, for the $j^{\rm th}$ coordinate, where $j\in \{2,\ldots, n-1\}$, we partition the sequence $(x^{(n)}_{i,j})_{i=1}^{2^{2^{n-1}}}$ into $2^{2^{n-2}}$ blocks, each of $2^{2^{n-2}}$ values, and sort the blocks as well as the values inside the blocks according to the values $(x^{(n-1)}_{i,j})_{i=1}^{2^{2^{n-2}}}$. The values of the $n^{\rm th}$ coordinate are then obtained from the values of the $(n-1)^{\rm th}$ coordinate by reversing the sequence in each of the $2^{2^{n-2}}$ blocks, while keeping the sequence of the blocks. For example, the coordinates of the vertices from $X^{(3)}$ are shown in Table~\ref{tab:coordinates}.
\begin{table}[ht]
\centering
\begin{tabular}{|c||c|c|} \hline $i$ & 1 & 2\\ \hline \hline $x^{(1)}_{i,1}$ & 1&2 \\ \hline \end{tabular}
\begin{tabular}{|c||c|c|c|c|} \hline $i$ & 1 & 2 & 3 & 4\\ \hline \hline $x^{(2)}_{i,1}$ & 1&2 & 3 & 4 \\ \hline $x^{(2)}_{i,2}$ & 2 & 1 & 4 & 3 \\ \hline \end{tabular}
\begin{tabular}{|c||c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $i$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ \hline\hline $x^{(3)}_{i,1}$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 \\ \hline $x^{(3)}_{i,2}$ & 6 & 5 & 8 & 7 & 2 & 1 & 4 & 3 & 14 & 13 & 16 & 15 & 10 & 9 & 12 & 11 \\ \hline $x^{(3)}_{i,3}$ & 7 & 8 & 5 & 6 & 3 & 4 & 1 & 2 & 15 & 16 & 13 & 14 & 11 & 12 & 9 & 10 \\ \hline
\end{tabular}
\caption{Coordinates of the vertices in sets $X^{(1)}$, $X^{(2)}$, and $X^{(3)}$, respectively.}
\label{tab:coordinates}
\end{table}
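The recursive construction can be stated very compactly in code. The following Python sketch (our own illustration) builds $X^{(n)}$ exactly as above, 1-indexed, and verifies the general position property of $X^{(3)}$ by the coordinatewise betweenness criterion of Section~\ref{sec:intro}.
\begin{verbatim}
from itertools import combinations

def build_X(n):
    # X^(n) as a list of 2^(2^(n-1)) integer tuples, following the recursion
    if n == 2:
        return [(1, 2), (2, 1), (3, 4), (4, 3)]
    prev = build_X(n - 1)
    B = 2 ** (2 ** (n - 2))               # block size = |X^(n-1)|
    X = []
    for i in range(1, B * B + 1):
        p, r = divmod(i - 1, B)           # i = p*B + r with r in [1, B]
        r += 1
        row = [i]                         # first coordinate: x_{i,1} = i
        for j in range(2, n):             # coordinates 2, ..., n-1
            row.append((prev[p][j - 1] - 1) * B + prev[r - 1][j - 1])
        row.append(prev[p][n - 2] * B - prev[r - 1][n - 2] + 1)   # coordinate n
        X.append(tuple(row))
    return X

def between(u, v, w):
    return all(min(a, b) <= c <= max(a, b) for a, b, c in zip(u, v, w))

X3 = build_X(3)
assert all(not (between(a, c, b) or between(a, b, c) or between(b, c, a))
           for a, b, c in combinations(X3, 3))
\end{verbatim}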
To complete the proof it suffices to show that for each $n\ge 2$, the set $X^{(n)}$ forms a general position set of $P_\infty^n$. We proceed by induction on $n$, the base case $n=2$ being clear. Suppose now that $X^{(n-1)}$ is a general position set of $P_\infty^{n-1}$ and consider $X^{(n)}$. Partition the set $[2^{2^{n-1}}]$ into $2^{2^{n-2}}$ {\em blocks} $\{1,\ldots, 2^{2^{n-2}}\}$, $\{2^{2^{n-2}}+1,\ldots, 2^{2^{n-2}+1}\}$, $\ldots$ Let $x_i^{(n)}$, $x_j^{(n)}$, and $x_k^{(n)}$ be pairwise different vertices from $X^{(n)}$ and consider the following three cases, where we again write each number $m\in [2^{2^{n-1}}]$ as $m = p_m\cdot 2^{2^{n-2}} + r_m$, where $0\le p_m < 2^{2^{n-2}}$ and $r_m\in [2^{2^{n-2}}]$.
To prove that no three vertices of $X^{(n)}$ lie on a common geodesic, the following claim will be useful.
\noindent {\bf Claim A} If $n\ge 3$ and if $i$ and $j$ are in the same block, then $x^{(n)}_{i,n-1}<x^{(n)}_{j,n-1}$ if and only if $x^{(n)}_{i,n}>x^{(n)}_{j,n}$.
\noindent Let $n\ge 3$ and let $i = p_i \cdot 2^{2^{n-2}} + r_i$ and $j = p_j \cdot 2^{2^{n-2}} + r_j$. Assume that $x^{(n)}_{i,n-1}<x^{(n)}_{j,n-1}$. By the construction, $x^{(n)}_{i,n-1} = \left(x_{(p_i+1),n-1}^{(n-1)} - 1\right)\cdot 2^{2^{n-2}} + x_{r_i,n-1}^{(n-1)}$ and $x^{(n)}_{j,n-1} = \left(x_{(p_j+1),n-1}^{(n-1)} - 1\right)\cdot 2^{2^{n-2}} + x_{r_j,n-1}^{(n-1)}$. Since $i$ and $j$ are in the same block, that is, $p_i=p_j$, we get that $x^{(n)}_{j,n-1}-x^{(n)}_{i,n-1} = x_{r_j,n-1}^{(n-1)} - x_{r_i,n-1}^{(n-1)}$. As we have assumed that $x^{(n)}_{i,n-1}<x^{(n)}_{j,n-1}$, it follows that $x_{r_j,n-1}^{(n-1)} > x_{r_i,n-1}^{(n-1)}$. Since $x^{(n)}_{i,n} = \left(x_{(p_i+1),n-1}^{(n-1)} \right)\cdot 2^{2^{n-2}} - x_{r_i,n-1}^{(n-1)} + 1$ and $x^{(n)}_{j,n} = \left(x_{(p_j+1),n-1}^{(n-1)} \right)\cdot 2^{2^{n-2}} - x_{r_j,n-1}^{(n-1)} + 1$, we conclude that $x^{(n)}_{i,n}-x^{(n)}_{j,n} = x_{r_j,n-1}^{(n-1)} - x_{r_i,n-1}^{(n-1)} > 0$. The reverse implication is proved along the same lines. This proves Claim~A.
We now distinguish the following cases.
\noindent
{\bf Case 1}: $|\{p_i, p_j, p_k\}| = 1$. \\ In this case $i$, $j$, and $k$ belong to the same block. Then by the induction hypothesis, the first $n-1$ coordinates assure that $x_i^{(n)}$, $x_j^{(n)}$, and $x_k^{(n)}$ do not lie on a common geodesic.
\noindent
{\bf Case 2}: $|\{p_i, p_j, p_k\}| = 2$.\\ In this case we may assume without loss of generality that $p_i = p_j < p_k$. Further assuming without loss of generality that $r_i < r_j$, we have $x^{(n)}_{i,1} < x^{(n)}_{j,1} < x^{(n)}_{k,1}$. Hence if $x_i^{(n)}$, $x_j^{(n)}$, and $x_k^{(n)}$ lie on a common geodesic, then necessarily $x_j^{(n)}$ lies between $x_i^{(n)}$ and $x_k^{(n)}$. Suppose first that $x^{(n)}_{i,n-1} < x^{(n)}_{j,n-1} < x^{(n)}_{k,n-1}$. Then by Claim~A we have $x^{(n)}_{j,n} < x^{(n)}_{i,n}$. Since $x^{(n)}_{i,n-1} < x^{(n)}_{k,n-1}$ and $i, k$ are in different blocks, we have $x^{(n)}_{i,n} < x^{(n)}_{k,n}$. Therefore, $x^{(n)}_{j,n} < x^{(n)}_{i,n} < x^{(n)}_{k,n}$, so $x_i^{(n)}$, $x_j^{(n)}$, and $x_k^{(n)}$ do not lie on a common geodesic. By a parallel argument we come to the same conclusion if $x^{(n)}_{i,n-1} > x^{(n)}_{j,n-1} > x^{(n)}_{k,n-1}$ holds.
\noindent
{\bf Case 3}: $|\{p_i, p_j, p_k\}| = 3$.\\ In this case $p_i$, $p_j$, and $p_k$ are pairwise different, so the three vertices lie in three different blocks. Consider each block as a single contracted vertex whose coordinates are the integer parts obtained when the coordinates are divided by $2^{2^{n-2}}$ (that is, the $p$ values of the coordinates). Since the $p$ values are, by the construction, obtained from a general position set of the $(n-1)$-dimensional grid, the induction hypothesis assures that these contracted vertices do not lie on a common geodesic, as they are part of a general position set in $P_\infty^{n-1}$. This in turn implies that also $x_i^{(n)}$, $x_j^{(n)}$, and $x_k^{(n)}$ do not lie on a common geodesic.
\section{Concluding remarks} \label{sec:conclude}
Recall that a subgraph $H$ of a graph $G$ is {\em isometric} if $d_H(u,v) = d_G(u,v)$ holds for each pair of vertices $u,v\in V(H)$. Since $P_{i_1}\,\square\, \cdots \,\square\, P_{i_n}$ is an isometric subgraph of $P_\infty^n$, Theorem~\ref{thm:main} immediately implies:
\begin{corollary} \label{cor:finite-grids} If $n\ge 2$, and $i_1, \ldots, i_n\ge 2^{2^{n-1}}$, then ${\rm gp}(P_{i_1}\,\square\, \cdots \,\square\, P_{i_n}) = 2^{2^{n-1}}$. \end{corollary}
More generally, if a graph $G$ contains an isometric grid $P_{i_1}\,\square\, \cdots \,\square\, P_{i_n}$, where each $i_j\ge 2^{2^{n-1}}$, then ${\rm gp}(G) \ge 2^{2^{n-1}}$. For instance:
\begin{corollary} \label{cor:finite-turus} If $n\ge 2$, and $i_1, \ldots, i_n\ge 2^{2^{n-1}+1}$, then ${\rm gp}(C_{i_1}\,\square\, \cdots \,\square\, C_{i_n}) \ge 2^{2^{n-1}}$. \end{corollary}
From~\cite{ghorbani-2019} we know that ${\rm gp}(G\,\square\, H) \geq {\rm gp}(G) + {\rm gp}(H) - 2$ holds for finite, connected graphs $G$ and $H$. Since the general position number of a path is $2$, Corollary~\ref{cor:finite-grids} demonstrates that the difference in the inequality can be arbitrarily large.
In~\cite{klavzar-2019b} a formula for the number of general position sets of cardinality $4$ in $P_r\,\square\, P_s$ (that is, of largest general position sets) is determined for each $r, s\ge 2$. Because of this result and Corollary~\ref{cor:finite-grids}, an interesting and intriguing problem is to determine the number of largest general position sets in $P_{i_1}\,\square\, \cdots \,\square\, P_{i_n}$, where $n\ge 3$ and $i_1, \ldots, i_n\ge 2^{2^{n-1}}$.
\end{document}
Engineering Computational Intelligence and Complexity
Advances in Intelligent Systems and Computing
Advanced Computing and Systems for Security
Editors: Chaki, R., Saeed, K., Cortesi, A., Chaki, N. (Eds.)
Discusses recent research trends in the areas of computational intelligence, communications, data mining, and computational models
Presents applications to design, analysis, and modeling for key areas in computational intelligence
Discusses efficient computational algorithms and robust tools for efficient implementation
This book presents extended versions of papers originally presented and discussed at the 3rd International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2016) held from August 12 to 14, 2016 in Kolkata, India. The symposium was jointly organized by the AGH University of Science & Technology, Cracow, Poland; Ca' Foscari University, Venice, Italy; and the University of Calcutta, India.
The book is divided into two volumes, Volumes 3 and 4, and presents dissertation works in the areas of Image Processing, Biometrics-based Authentication, Soft Computing, Data Mining, Next-Generation Networking and Network Security, Remote Healthcare, Communications, Embedded Systems, Software Engineering and Service Engineering. The first two volumes of the book published the works presented at the ACSS 2015, which was held from May 23 to 25, 2015 in Kolkata, India.
Rituparna Chaki is Professor of Information Technology at the University of Calcutta, India. She received her Ph.D. degree from Jadavpur University in India in 2003. Before this she completed her B.Tech. and M.Tech. in Computer Science & Engineering from the University of Calcutta in 1995 and 1997 respectively. She served as a System Executive in the Ministry of Steel, Government of India for nine years, before joining academia in 2005 as a Reader of Computer Science & Engineering in the West Bengal University of Technology, India. She has been with the University of Calcutta since 2013.
Her areas of research include Optical networks, Sensor networks, Mobile ad hoc networks, Internet of Things, Data Mining, etc. She has nearly 100 publications to her credit. Dr. Chaki has also served on the program committees of different international conferences. She has been a regular Visiting Professor at the AGH University of Science & Technology, Poland for the last few years. Rituparna has co-authored a couple of books published by CRC Press, USA.
Agostino Cortesi, Ph.D., is a full professor of Computer Science at Ca' Foscari University, Venice, Italy. He has served as Dean of Computer Science studies, as Department Chair, and as Vice-Rector for quality assessment and institutional affairs.
His main research interests concern programming language theory, software engineering, and static analysis techniques, with particular emphasis on security applications. He has published more than 110 papers in high-level international journals and in the proceedings of international conferences; his h-index is 16 according to Scopus and 24 according to Google Scholar. He has served several times as a member (or chair) of the program committees of international conferences (e.g., SAS, VMCAI, CSF, CISIM, ACM SAC), and he is on the editorial boards of the journals "Computer Languages, Systems and Structures" and "Journal of Universal Computer Science". He currently holds the chairs of "Software Engineering", "Program Analysis and Verification", "Computer Networks and Information Systems" and "Data Programming".
Khalid Saeed is a Full Professor in the Faculty of Computer Science, Bialystok University of Technology, Bialystok, Poland. He received his B.Sc. in Electrical and Electronics Engineering from Baghdad University in 1976, and his M.Sc. and Ph.D. from Wroclaw University of Technology, Poland, in 1978 and 1981, respectively. He received his D.Sc. (Habilitation) in Computer Science from the Polish Academy of Sciences in Warsaw in 2007. He was a visiting professor of Computer Science at Bialystok University of Technology, where he now works as a full professor; he was with the AGH University of Science and Technology from 2008 to 2014 and is also a professor in the Faculty of Mathematics and Information Sciences at Warsaw University of Technology. His areas of interest are biometrics, image analysis and processing, and computer information systems. He has published more than 220 publications, edited 28 books, journals and conference proceedings, and authored 10 text and reference books. He has supervised more than 130 M.Sc. and 16 Ph.D. theses and has given more than 40 invited lectures and keynotes at conferences and universities in Europe, China, India, South Korea and Japan on biometrics, image analysis and processing. He has received more than 20 academic awards and is a member of more than 20 editorial boards of international journals and conferences. He is an IEEE Senior Member, was selected as an IEEE Distinguished Speaker for 2011-2016, and is the Editor-in-Chief of the International Journal of Biometrics with Inderscience Publishers.
Nabendu Chaki is a Professor in the Department of Computer Science & Engineering, University of Calcutta, Kolkata, India. He completed his first degree in Physics at the Presidency College in Kolkata and then studied Computer Science & Engineering at the University of Calcutta, receiving his Ph.D. from Jadavpur University, India, in 2000. He shares six international patents, including four U.S. patents, with his students. Prof. Chaki has been active in developing international standards for Software Engineering and Cloud Computing as a Global Directory (GD) member for ISO-IEC. Besides editing more than 25 book volumes, he has authored 6 text and research books and more than 150 Scopus-indexed research papers in journals and international conferences. His research interests include distributed systems, image processing and software engineering. Dr. Chaki has served as Research Faculty in the Ph.D. program in Software Engineering at the U.S. Naval Postgraduate School, Monterey, CA, and is a visiting faculty member for many universities in India and abroad. Besides being on the editorial boards of several international journals, he has served on the committees of over 50 international conferences. Prof. Chaki is the founder Chair of the ACM Professional Chapter in Kolkata.
Table of contents:
A Heuristic Framework for Priority Based Nurse Scheduling (Sarkar, Paramita, et al.)
A Divide-and-Conquer Algorithm for All Spanning Tree Generation (Chakraborty, Maumita, et al.)
Circuit Synthesis of Marked Clique Problem using Quantum Walk (Sanyal(Bhaduri), Arpita, et al.)
Abort-Free STM: A Non-blocking Concurrency Control Approach Using Software Transactional Memory (Ghosh, Ammlan, et al.)
Graph Problems Performance Comparison Using Intel Xeon and Intel Xeon-Phi (Hanzelka, Jiří, et al.)
A Novel Image Steganographic Scheme Using 8 × 8 Sudoku Puzzle (Dey, Debanjali, et al.)
Comparison of K-means Clustering Initialization Approaches with Brute-Force Initialization (Golasowski, Martin, et al.)
Association Based Multi-attribute Analysis to Construct Materialized View (Roy, Santanu, et al.)
A New Method for Key Author Analysis in Research Professionals' Collaboration Network (Bihari, Anand, et al.)
Single-Shot Person Re-identification by Spectral Matching of Symmetry-Driven Local Features (Nanda, Aparajita, et al.)
Analysis of Eavesdropping in QKD with Qutrit Photon States (Banerjee, Supriyo, et al.)
Evaluating the Performance of a Chaos Based Partial Image Encryption Scheme (Som, Sukalyan, et al.)
Keystroke Dynamics and Face Image Fusion as a Method of Identification Accuracy Improvement (Panasiuk, Piotr, et al.)
Editors: Rituparna Chaki, Khalid Saeed, Agostino Cortesi, Nabendu Chaki
Publisher: Springer Nature Singapore Pte Ltd.
DOI: 10.1007/978-981-10-3409-1
Pages: XIII, 197
Illustrations: 57 b/w illustrations